Change Readiness as Binding Constraint

Why Transformation Fails Before It Starts

Every healthcare transformation initiative carries an implicit assumption: the organization can absorb this change. The assumption is rarely tested. Project plans document timelines, milestones, resource allocations, and governance structures. They almost never document whether the workforce is psychologically prepared to implement the change, whether leadership commitment will survive the first quarter of difficulty, whether the infrastructure can support the new workflows, or whether the organization’s history of prior changes has left sufficient capacity for another one. The initiative launches. Resistance emerges. Workarounds proliferate. Adoption stalls. The post-mortem blames the initiative — the technology was wrong, the vendor underperformed, the timeline was too aggressive. But the initiative was rarely the problem. The soil was.

Organizational readiness for change is not a vague cultural sentiment. It is a measurable, multi-dimensional construct with predictive validity for implementation outcomes. Deploying a transformation initiative into an unready organization is the most expensive way to discover that readiness was the binding constraint — because by the time the failure is visible, the grant period is half spent, the workforce is more fatigued than before, and the trust required for the next attempt has been further eroded. The alternative is to assess readiness before deployment and, when readiness is insufficient, invest in readiness-building rather than initiative-launching. This is not caution. It is operational discipline.


Change Readiness Defined: Two Conditions, Both Required

Bryan Weiner’s (2009) theory of organizational readiness for change provides the most operationally useful framework. Weiner defines organizational readiness as a shared psychological state in which organizational members are both committed to implementing a change and confident in their collective ability to do so. The model identifies two necessary components:

Change valence — the degree to which organizational members believe the change is needed, beneficial, and worth the disruption it will cause. Valence is the motivational component: Do people believe this change matters? Do they see the problem it addresses as real? Do they believe the proposed solution is the right one? High valence means the workforce looks at the initiative and thinks, “Yes, we need this.” Low valence means they think, “Why are we doing this?” or, more corrosively, “This is a solution looking for a problem.”

Change efficacy — the degree to which organizational members believe the organization has the collective capability to implement the change successfully. Efficacy is the confidence component: Do people believe we can pull this off? Do they believe the resources, skills, leadership, and infrastructure are adequate? High efficacy means the workforce looks at the initiative and thinks, “This will be hard, but we can do it.” Low efficacy means they think, “There is no way this works here” — regardless of whether they agree the change is needed.

Both must be present. An organization with high valence but low efficacy agrees the change is necessary but does not believe it can succeed — producing demoralization and token compliance. An organization with high efficacy but low valence believes it could succeed but sees no reason to try — producing indifference and passive non-adoption. An organization with low valence and low efficacy produces active resistance. Only when both are present — when people believe the change matters and believe the organization can execute it — does readiness exist.
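The two-component logic is compact enough to state as executable pseudologic. A minimal sketch in Python, where the 0-to-1 score scale and the 0.6 threshold are illustrative assumptions, not part of Weiner's model:

```python
def readiness_state(valence: float, efficacy: float, threshold: float = 0.6) -> str:
    """Classify readiness from survey-derived valence and efficacy scores
    (0.0 to 1.0). The 0.6 threshold is an illustrative assumption."""
    high_valence = valence >= threshold
    high_efficacy = efficacy >= threshold
    if high_valence and high_efficacy:
        return "ready: the change matters and the organization can execute it"
    if high_valence:
        return "demoralization risk: agreement without confidence (token compliance)"
    if high_efficacy:
        return "indifference risk: capability without motivation (passive non-adoption)"
    return "active resistance: neither motivation nor confidence"
```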

This two-component structure explains a failure pattern that frustrates healthcare leaders: the initiative that everyone agreed was a good idea but nobody adopted. Agreement is valence. Adoption requires efficacy. A workforce that believes care coordination is important but does not believe the organization can implement it — because the last three IT projects failed, because staffing is too thin, because leadership attention will evaporate after the board presentation — will agree in meetings and resist in practice. The gap between stated support and actual adoption is the gap between valence and efficacy.

Armenakis, Harris, and Mossholder (1993) laid the conceptual groundwork for readiness assessment by identifying five components of the readiness message that organizational members must internalize: discrepancy (is there a gap between current state and desired state?), appropriateness (is this the right response to that gap?), efficacy (can we do this?), principal support (are the people in charge behind this?), and personal valence (what is in it for me?). Holt, Armenakis, Feild, and Harris (2007) operationalized this framework into a validated measurement instrument — the Readiness for Organizational Change scale — demonstrating that readiness is not just a concept but a measurable state with reliable psychometric properties.


The Five Assessment Dimensions

Readiness is not a single score. It is a composite of at least five dimensions, each of which can independently constrain implementation success.

Leadership Commitment

Not whether leaders endorsed the initiative — endorsement is cheap. Whether leadership commitment is visible, sustained, and resourced. Visible means frontline staff can see that leadership is paying attention, attending go-live events, asking about progress, and making decisions that prioritize the initiative. Sustained means the commitment persists past the announcement, through the messy middle, and into the period where other priorities compete for attention. Resourced means leadership has allocated budget, staffing, and protected time — not just approved the concept.

The most common failure mode for leadership commitment is decay. The CEO champions a grant application, secures the funding, announces the initiative at an all-hands meeting, and then turns attention to the next strategic priority. The initiative is delegated to a project manager two levels down who lacks the authority to resolve cross-departmental conflicts, reallocate resources, or override competing priorities. The workforce notices. The signal they receive is: this is not important enough for leadership to stay involved. That signal is fatal to efficacy — if leadership does not believe this deserves their sustained attention, why should anyone else believe it will succeed?

Workforce Capacity

Time, skills, and bandwidth. This is the dimension most often ignored because it is the most uncomfortable to confront. Workforce capacity asks: Do the staff who must implement this change have the time to learn it, practice it, and integrate it into their workflows — without sacrificing patient care or burning out in the process?

This connects directly to Workforce Module 2. A workforce already at 90% utilization has no slack to absorb a new initiative. Every hour spent in training, every workflow adjustment during the learning curve, every workaround during the go-live period comes at the expense of something else — patient throughput, documentation completion, break time, or the personal reserves that keep burnout at bay. Organizations that launch transformation initiatives into capacity-depleted workforces are not asking staff to adopt a change. They are asking staff to absorb additional cognitive and operational load on top of a workload that was already unsustainable. The predictable result is not resistance — it is something worse: exhausted compliance that degrades both the initiative and the baseline work.

The connection to Human Factors Module 2 is mechanistic. Cognitive load theory (Sweller, 1988) establishes that learning and integrating new workflows imposes germane cognitive load on top of the intrinsic load of clinical work. When total cognitive load exceeds working memory capacity, performance degrades on both the new task and the existing ones. A nurse learning a new care coordination workflow while managing a full patient assignment is not just slower at the new workflow — she is slower and less accurate at everything. This is not a motivation problem. It is a cognitive architecture problem.

Infrastructure Support

The systems, tools, and organizational structures required to support the change. For technology-dependent initiatives — which most healthcare transformations now are — this means the EHR is capable, the interfaces work, the network is reliable, and the hardware is in place. But infrastructure extends beyond technology: Does the training infrastructure exist? Are there enough trainers, training environments, and training hours? Is supervision available during the transition? Are the policies and procedures updated before go-live, not after?

Infrastructure readiness is the dimension most susceptible to optimism bias. Project teams assume the EHR can be configured, the interfaces will work, and the workflows can be redesigned — because these are technical problems with technical solutions. What they underestimate is the gap between technical capability and operational readiness. The EHR may be capable of supporting care coordination workflows in theory, but adapting it requires workflow analysis, configuration, testing, and training that take months, not weeks. A system that is technically capable but operationally unprepared is not ready.

Past Change History

Was the last change well-managed? Did leadership deliver what it promised? Did the implementation go as planned, or did it go sideways in ways that staff still remember? Past change history is the experiential foundation for efficacy beliefs. If the last EHR upgrade was chaotic — if the go-live date slipped three times, if the training was inadequate, if the support desk was overwhelmed, if the workarounds staff developed were never resolved — then the workforce’s efficacy belief for the next technology-dependent initiative will be low regardless of how good the new initiative is. They are not evaluating this change. They are evaluating this organization’s capacity to manage change, based on a track record they experienced firsthand.

This connects directly to Workforce Module 4 on leadership trust. Slovic’s (1993) trust asymmetry principle applies: trust built over years of competent change management can be destroyed by a single badly managed implementation, and the destroyed trust contaminates the readiness assessment for all subsequent changes. An organization with a poor change history does not merely need a better change plan. It needs to repair the trust deficit before launching — which requires acknowledging the history, demonstrating what will be different this time, and making credible, small-scale commitments that rebuild efficacy beliefs before the full deployment begins.

Stakeholder Alignment

Do the key stakeholder groups agree on the need for the change, the approach, and the priorities? Stakeholder alignment is not consensus — it is the absence of active opposition from groups with the power to undermine implementation. In healthcare, the critical stakeholders are typically physicians, nursing leadership, IT, finance, and — for grant-funded programs — the external funder.

Misalignment between stakeholder groups produces a specific failure pattern: the initiative advances on paper while being sabotaged in practice. Physicians who are skeptical of a new care model will comply with documentation requirements while continuing to practice as before. Nurses who were not consulted during design will implement the workflow as written while quietly maintaining the old workflow as backup. IT staff who were given an unrealistic timeline will meet the deadline by cutting testing, producing a go-live that technically launches on time and functionally fails on day one. Each group’s non-alignment manifests differently, but the effect is the same: the initiative is adopted in form but not in function.


Change Fatigue: The Depleted Capacity Problem

Organizational change capacity is finite. Each change initiative — whether successful or not — consumes cognitive bandwidth, emotional energy, and institutional attention. An organization that has undergone multiple recent changes has depleted these reserves. The workforce’s capacity to absorb another change is not determined by the quality of the next initiative. It is determined by the cumulative demand of all recent initiatives relative to the organization’s recovery capacity.

Herscovitch and Meyer (2002) distinguished three forms of commitment to change: affective commitment (I want to support this change), continuance commitment (I have to support this change), and normative commitment (I ought to support this change). Change fatigue systematically degrades affective commitment — people stop wanting to support changes because they are exhausted from the last three — while continuance commitment (compliance driven by consequences) persists. The result is an organization that looks compliant but is not committed. Staff show up to the training. They complete the modules. They use the new system when someone is watching. They revert to the old workflow when nobody is watching. The adoption metrics show success. The outcomes do not.

Change fatigue has a specific cognitive mechanism that connects to Human Factors Module 2. Each new initiative adds load on top of clinical work: the germane load of learning new systems and workflows, plus the extraneous load of navigating the ambiguity of the transition period. When multiple changes overlap or follow in rapid succession, this added load accumulates. It competes with the intrinsic cognitive load of clinical work for the same finite working memory resources. The clinician who is simultaneously adapting to a new EHR module, a redesigned care pathway, a revised documentation standard, and a new quality reporting requirement is not resistant to change. She is cognitively overloaded. Her working memory cannot accommodate all of the new demands while maintaining performance on her primary task — patient care. Something gives. Usually, it is the newest initiative, because the newest initiative has the weakest habit formation and the lowest switching cost to abandon.

The practical implication is that change capacity must be managed as a finite resource, not assumed as a constant. Organizations should maintain a change inventory — a running catalog of active and recent initiatives that identifies the total change load on each affected role group. Before launching a new initiative, the inventory should answer: What else is this workforce absorbing right now? What did they absorb in the last six months? Is there sufficient recovery between the last change and this one? If the answer is no, the correct decision is to sequence the initiative later, not to push through and hope the workforce can handle it.
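A change inventory does not require sophisticated tooling. A minimal sketch in Python, assuming one flat record per initiative; the field names, the 1-to-5 load scale, and the six-month recovery window are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Initiative:
    name: str
    affected_groups: list[str]   # role groups absorbing the change
    start: date
    end: date | None             # None means still active
    load: int                    # 1 (minor tweak) to 5 (major workflow change); scale assumed

def change_load(inventory: list[Initiative], group: str, today: date,
                recovery_months: int = 6) -> int:
    """Cumulative change load on one role group: every active initiative,
    plus anything that ended inside the recovery window."""
    total = 0
    for item in inventory:
        if group not in item.affected_groups:
            continue
        ended_recently = (item.end is not None
                          and (today - item.end).days <= recovery_months * 30)
        if item.end is None or ended_recently:
            total += item.load
    return total

# Before launching: if change_load(inventory, "nursing", date.today()) is already
# high, the correct decision is to sequence the new initiative later.
```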


Healthcare Example: A Rural Health System and the HRSA Grant

A 75-bed rural health system has secured a 3-year, $2.4M HRSA grant to deploy a care coordination platform. The platform will enable population health management, chronic disease panel tracking, care transition workflows, and community health worker integration. The grant timeline requires platform deployment within 12 months, with outcome metrics reported quarterly thereafter.

The leadership team, energized by the grant award, begins planning a 9-month implementation. Before committing to the timeline, the VP of operations insists on a formal readiness assessment across all five dimensions. The results:

Leadership commitment: High. The CEO championed the grant application, the board approved the required match, and the CMO has agreed to serve as clinical sponsor. Leadership is visible and invested.

Workforce capacity: Low. Nursing turnover is at 22% — well above the national average. Remaining nursing staff are at approximately 90% utilization, meaning they have almost no slack in their schedules. The care coordination platform requires nurses to manage chronic disease panels, conduct post-discharge follow-up calls, and document care plans — work that will take approximately 45 minutes per patient per week. With current staffing, there is no capacity to absorb this workload without either reducing other duties or increasing hours.
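The arithmetic behind the low rating is worth making explicit. A sketch using the assessment's own figures, with an assumed 40-hour week added for illustration:

```python
# Only the 90% utilization and 45-minutes-per-patient figures come from the
# assessment; the 40-hour week is an assumption added for illustration.
weekly_hours = 40
utilization = 0.90
slack_hours = weekly_hours * (1 - utilization)   # ~4 hours of absorbable capacity
minutes_per_patient_per_week = 45
absorbable_patients = slack_hours * 60 / minutes_per_patient_per_week
print(f"Slack per nurse: {slack_hours:.1f} h/week -> about {absorbable_patients:.0f} patients")
# Slack per nurse: 4.0 h/week -> about 5 patients
# A chronic disease panel cannot be absorbed at roughly five patients per nurse;
# the workload requires staffing, not enthusiasm.
```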

Infrastructure support: Moderate. The EHR is technically capable of supporting the platform — the vendor has a care coordination module. But implementation requires workflow redesign, interface configuration with the health information exchange, training for all clinical staff, and the development of new documentation templates. The IT department has two staff members, both fully committed to existing operations. No workflow analysis has been done.

Past change history: Poor. Eighteen months ago, the system implemented an EHR upgrade that went badly. The go-live was delayed twice. Training was compressed from three weeks to one week because of staffing constraints. The support desk was overwhelmed for the first two months. Nurses developed workarounds that were never formally resolved — they still maintain parallel paper tracking sheets for medication reconciliation because the EHR workflow is unreliable. The phrases “just like last time” and “here we go again” are already circulating in response to the care coordination announcement.

Stakeholder alignment: Mixed. Physicians are broadly supportive — they see the value of care coordination and have been requesting better panel management tools. Nurses are skeptical — they view the initiative as additional work on top of an already unsustainable workload, and the EHR history has depleted their confidence that the technology will work. Community health workers are uncertain about their role. The finance team is focused on the match requirement and worried about sustainability after the grant period ends.

Composite assessment: High implementation risk. Two of the five dimensions — workforce capacity and past change history — are below threshold. Stakeholder alignment is mixed, with the group most critical to implementation (nursing) in the skeptical category.

The readiness assessment produces a decision that no project plan would have generated: delay deployment by four months. The four months are invested in the two binding constraints:

For workforce capacity: the system uses the first four months of grant funding to hire three additional RNs and two community health workers, bringing staffing to a level where clinical staff can absorb the new workflows without exceeding sustainable utilization. The grant budget is restructured to front-load staffing costs and defer technology costs.

For change history repair: the VP of operations conducts three sessions with nursing staff where she explicitly acknowledges the EHR upgrade failure — what went wrong, what the organization learned, and what will be different this time. The IT team resolves the three most-cited EHR workarounds, demonstrating that the organization is capable of fixing problems, not just creating new ones. A nursing advisory group is formed and given real authority over workflow design decisions for the care coordination platform — not a suggestion box, but a governance role with veto power over go-live readiness.

The result: when the platform deploys four months later than originally planned, it deploys into a workforce that has capacity to absorb it and a recent experience of competent change management. Nursing adoption reaches 70% within six weeks — a rate that would have been unachievable in the original timeline. The grant program meets its Year 1 milestones by Month 16 instead of Month 12, but it meets them with genuine adoption rather than performative compliance. The program is on track for sustained outcomes by Year 3. The four-month delay saved the grant.


Readiness as a Go/No-Go Gate

The HRSA example illustrates a principle that should govern every transformation initiative: readiness assessment is not a diagnostic curiosity. It is a go/no-go gate. If readiness is sufficient, proceed. If readiness is insufficient, invest in readiness-building before investing in initiative deployment.

This requires a threshold — a definition of “sufficient readiness” that can be evaluated against the assessment dimensions. The threshold need not be quantitatively precise to be operationally useful. A simple red-yellow-green rating on each dimension, with a rule that no dimension can be red at deployment, provides more decision value than most organizations currently have. The Consolidated Framework for Implementation Research (CFIR), developed by Damschroder et al. (2009), identifies the inner setting, which includes readiness for implementation as a core construct, as one of five major domains affecting implementation outcomes, and provides a structured assessment framework that organizations can adapt.
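The no-red-at-deployment rule is simple enough to state as executable logic. A minimal sketch; the dimension names follow this module, and everything else is an assumption:

```python
def go_no_go(assessment: dict[str, str]) -> tuple[bool, list[str]]:
    """Gate rule: no dimension may be red at deployment.
    `assessment` maps dimension name -> 'red' | 'yellow' | 'green'."""
    blockers = [dim for dim, rating in assessment.items() if rating == "red"]
    return len(blockers) == 0, blockers

go, blockers = go_no_go({
    "leadership_commitment":  "green",
    "workforce_capacity":     "red",
    "infrastructure_support": "yellow",
    "past_change_history":    "red",
    "stakeholder_alignment":  "yellow",
})
# go is False; blockers == ['workforce_capacity', 'past_change_history'].
# The decision is to invest in those dimensions, not to deploy and hope.
```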

The resistance to readiness gating is predictable: it feels like delay. Grant timelines are fixed. Board expectations are set. The CEO announced the initiative. Admitting that the organization is not ready feels like admitting failure before starting. But the alternative — deploying into an unready organization and discovering the readiness deficit through failed adoption — is not faster. It is slower, more expensive, and more damaging. A failed deployment does not just waste the implementation investment. It depletes the change capacity and trust required for the next attempt. It is a compounding loss.

Herscovitch and Meyer’s commitment typology provides the diagnostic for distinguishing real adoption from performative compliance. When readiness gating is skipped and the organization is not ready, the workforce defaults to continuance commitment — doing the minimum required to avoid consequences. The initiative appears to be adopted. Metrics show training completion rates, login frequencies, and documentation compliance. But the metrics measure form, not function. The care coordination calls are logged but perfunctory. The care plans are documented but not used. The chronic disease panels are populated but not actively managed. The grant reports look good. The patient outcomes do not change.


The Readiness Assessment Protocol

For operators who want to implement readiness assessment before their next initiative, the following protocol provides a starting framework:

Step 1: Define the affected population. Which role groups will be required to change their behavior? Not the organization as a whole — the specific people whose daily work will be different. For a care coordination platform, these are nursing staff, physicians, community health workers, medical assistants handling intake, and IT staff managing the platform.

Step 2: Assess each dimension for each affected group. Leadership commitment may be high for the system as a whole but invisible to the night shift. Workforce capacity may be adequate for physicians but depleted for nurses. Infrastructure may be ready for the main campus but not for the satellite clinics. Past change history may be positive for IT staff (who managed the EHR upgrade from the technical side and know it eventually worked) but negative for nursing (who experienced the chaos at the point of care). Stakeholder alignment may vary dramatically between departments. Readiness is not a system-level attribute. It is a group-level attribute, and the group with the lowest readiness is the binding constraint.

Step 3: Identify binding constraints. Which dimensions are below threshold, and for which groups? The binding constraint is the dimension that will cause implementation failure if not addressed — regardless of how strong the other dimensions are. An initiative with excellent leadership commitment, strong infrastructure, good change history, and aligned stakeholders will still fail if the workforce has no capacity to absorb it. One red dimension is enough to block the system.
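Steps 2 and 3 reduce to a dimension-by-group matrix in which any cell below threshold is a binding constraint. A sketch with illustrative scores on an assumed 1-to-5 scale:

```python
# Readiness scored per (dimension, role group) pair; the 1-to-5 scale,
# the scores, and the threshold are illustrative assumptions.
scores = {
    ("workforce_capacity",    "nursing"):    1,
    ("workforce_capacity",    "physicians"): 4,
    ("past_change_history",   "nursing"):    2,
    ("past_change_history",   "it_staff"):   4,
    ("leadership_commitment", "nursing"):    4,
}
THRESHOLD = 3

binding_constraints = [(dim, grp) for (dim, grp), s in scores.items() if s < THRESHOLD]
# [('workforce_capacity', 'nursing'), ('past_change_history', 'nursing')]
# The lowest-scoring group, not the system average, determines readiness.
```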

Step 4: Prescribe readiness interventions. For each binding constraint, define the specific investment required to bring it to threshold. Low workforce capacity requires staffing, workload reduction, or timeline extension. Poor change history requires trust repair — acknowledged failures, demonstrated competence on smaller changes, and genuine workforce participation in planning. Low stakeholder alignment requires engagement, negotiation, or — when a powerful stakeholder is fundamentally opposed — a difficult decision about whether to proceed against opposition or redesign the initiative.

Step 5: Define the gate criteria. What must be true before deployment begins? State the criteria in advance, not after the assessment, to prevent motivated reasoning from overriding the evidence. “Nursing utilization must be below 85% and at least two of three nursing leadership groups must assess the change as feasible” is a gate criterion. “Leadership feels confident we can proceed” is not.
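Stating gate criteria in advance means writing them as checkable predicates against measured values. A sketch built around the criterion quoted above; the function name and data shape are assumptions:

```python
# Gate criteria written before the assessment, then evaluated against
# measured values rather than post hoc impressions.
def evaluate_gate(measures: dict) -> dict[str, bool]:
    return {
        "nursing_utilization_below_85pct":
            measures["nursing_utilization"] < 0.85,
        # at least two of three nursing leadership groups assess as feasible
        "nursing_leadership_feasibility":
            sum(measures["leadership_feasible_votes"]) >= 2,
    }

results = evaluate_gate({
    "nursing_utilization": 0.88,
    "leadership_feasible_votes": [True, True, False],
})
# {'nursing_utilization_below_85pct': False, 'nursing_leadership_feasibility': True}
# One failed criterion blocks deployment. "Leadership feels confident" never
# appears here because it cannot be measured.
```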


Integration Points

Human Factors Module 4: Framing and Loss Aversion. How a change is communicated determines whether the workforce frames it as gain or loss — and Kahneman and Tversky’s (1979) prospect theory predicts that loss-framed changes face disproportionate resistance. In organizations with poor past change history, the default frame is already loss: staff expect the change to make things worse because that is what happened last time. Readiness-building must include deliberate reframing — not through marketing, but through early, visible, tangible improvements that shift the reference point. When the nursing advisory group resolves three long-standing EHR workarounds before the new platform launches, the frame shifts from “here is another thing that will not work” to “something is actually getting fixed.” The loss-aversion machinery works in reverse when the reference point moves: now, not adopting the new platform means losing the improvements they have already experienced.

Human Factors Module 5: Resilience Engineering. Hollnagel’s (2014) resilience engineering framework distinguishes between systems designed to prevent failure (Safety-I) and systems designed to enable adaptive recovery (Safety-II). Successful change adoption requires the same adaptive capacity that resilience engineering describes. A rigid implementation plan that assumes linear adoption is a Safety-I approach to change — it works only when conditions match the plan. Real implementation always encounters unexpected conditions: a key champion leaves, a workflow assumption proves wrong, an interface fails in production. Organizations with high adaptive capacity adjust — they modify the workflow, reassign the champion role, implement a manual workaround while the interface is fixed. Organizations without adaptive capacity treat the deviation as failure and either escalate to crisis management or abandon the initiative. Readiness assessment should include an evaluation of the organization’s demonstrated adaptive capacity — not their aspirational resilience, but their track record of adjusting when plans encountered reality.

Public Finance Module 4: Milestone Execution. Grant-funded transformation programs operate under fixed timelines with milestone-based reporting. When readiness is not assessed, the grant timeline becomes the implementation driver — not readiness, not adoption quality, not sustainability. The result is programs that hit milestone dates through performative compliance: the system is deployed (milestone met), staff are trained (completion certificates filed), and patients are enrolled (names entered in the platform). But deployment without adoption is an empty milestone. Training without competence is an empty milestone. Enrollment without active care coordination is an empty milestone. The funder receives the progress report. The patients receive nothing different. Readiness gating, even when it means renegotiating the timeline with the funder, protects the grant investment by ensuring milestones represent real capability, not reporting artifacts. Most federal funders — HRSA, SAMHSA, CMS Innovation Center — will accept a well-reasoned timeline modification over a program that meets dates but not outcomes.


Product Owner Lens

What is the workforce problem? Organizations deploy transformation initiatives without assessing whether the workforce, infrastructure, and institutional trust can absorb the change — and then attribute the resulting failure to the initiative rather than the readiness deficit that determined the outcome before deployment began.

What system mechanism explains it? Weiner’s two-component model: readiness requires both change valence (belief that the change is needed) and change efficacy (belief that the organization can execute it). Readiness is further modulated by five assessment dimensions — leadership commitment, workforce capacity, infrastructure support, past change history, and stakeholder alignment — each of which can independently block implementation. Change fatigue depletes the cognitive and emotional bandwidth available for new initiatives, following the same resource depletion dynamics that drive burnout (Workforce M2) and cognitive overload (HF M2).

What intervention levers exist? Readiness assessment before deployment, with binding constraints identified and addressed before launch. Readiness-building investments: staffing for capacity, trust repair for change history, advisory governance for stakeholder alignment, infrastructure buildout for technical readiness. Change inventory management to prevent cognitive overload from concurrent initiatives. Timeline negotiation with funders when readiness requires delay.

What should software surface? A readiness dashboard that scores each assessment dimension by affected role group, with red-yellow-green indicators and trend tracking over time. A change inventory — active and recent initiatives mapped to affected role groups with cumulative change load scores. A commitment quality indicator: the gap between training completion (form) and workflow adoption (function), measured through EHR usage patterns, workflow completion rates, and outcome metrics. A change history log that records past initiative outcomes — promised versus delivered — creating an institutional memory that informs efficacy beliefs.

What metric reveals degradation earliest? The gap between stated support and behavioral adoption — the Herscovitch and Meyer commitment quality signal. When training completion is high but workflow usage is low, the organization is in continuance commitment mode: complying without committing. This is the earliest measurable signal that readiness was insufficient. Secondary indicators: change fatigue proxy (number of concurrent active initiatives per role group), workforce capacity trend (utilization rates in affected units during implementation), and stakeholder alignment shift (change in sentiment scores between announcement and go-live).
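The form-versus-function gap can be computed from data most organizations already collect. A minimal sketch; the gap definition is an illustrative proxy, not a validated instrument:

```python
def commitment_quality_gap(training_completion: float, workflow_adoption: float) -> float:
    """Gap between form (training completion rate) and function (observed
    workflow adoption, e.g. from EHR usage logs), both 0.0 to 1.0. A large
    positive gap signals continuance commitment: complying, not committed."""
    return training_completion - workflow_adoption

gap = commitment_quality_gap(training_completion=0.96, workflow_adoption=0.41)
# A gap of 0.55: training metrics report success while behavior says otherwise --
# the earliest measurable signal that readiness was insufficient.
```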


Warning Signs

These indicators suggest that readiness is insufficient for a planned or active initiative:

  • Leadership attention has shifted to other priorities within weeks of the initiative announcement — sustained commitment is absent, and the workforce has noticed
  • Frontline staff describe the initiative using language from the last failed change: “here we go again,” “another flavor of the month,” “this too shall pass” — past change history is contaminating efficacy beliefs
  • Training sessions are attended but produce no questions — a sign of disengagement, not comprehension; an engaged workforce asks difficult questions, a resigned workforce sits quietly
  • The project timeline was built backward from the grant deadline rather than forward from readiness assessment — the deadline is driving the plan, not the plan driving the timeline
  • Key stakeholder groups were informed of the initiative after design decisions were finalized — producing the specific resentment that comes from being told rather than consulted
  • Workarounds from the previous change are still in place — the organization has not resolved the last disruption, and staff reasonably doubt it can manage the next one
  • Overtime hours are increasing in the units targeted for the initiative — workforce capacity is already depleted before the new demand is added
  • The project plan includes no readiness assessment step, no readiness gate, and no contingency for what happens if the organization is not ready — treating readiness as an assumption rather than a variable
  • Staff who will be most affected by the change were not included in the readiness assessment — the organization assessed its own readiness from the top down and declared itself ready based on leadership confidence rather than workforce reality

Summary

Change readiness is not optimism, enthusiasm, or executive endorsement. It is the measurable degree to which an organization’s members are psychologically prepared and collectively capable of implementing a specific change. Weiner’s model identifies the two necessary conditions — change valence and change efficacy — and establishes that both must be present. Armenakis and Holt’s operationalization demonstrates that readiness can be assessed with psychometric rigor, not just managerial intuition.

The five assessment dimensions — leadership commitment, workforce capacity, infrastructure support, past change history, and stakeholder alignment — provide the diagnostic structure. Each can independently block implementation. The binding constraint is the dimension with the lowest score, and no amount of strength on the other dimensions compensates for a critical weakness on one. Change fatigue adds a temporal dimension: organizations that have absorbed multiple recent changes have depleted the cognitive and emotional bandwidth required for the next one, following the same resource depletion dynamics described in Workforce M2 and HF M2.

The HRSA-funded rural health system demonstrates the principle in practice: a readiness assessment that reveals insufficient workforce capacity and poor change history, producing a four-month delay that saves the grant program. The delay is not caution. It is the recognition that deploying into an unready organization would have consumed the same four months in failed adoption and damage repair — but without the staffing, trust repair, and governance structures that made adoption possible.

The operational prescription is clear: every transformation initiative should include a readiness assessment as a formal go/no-go gate. When readiness is insufficient, the investment goes to readiness-building, not initiative deployment. The Consolidated Framework for Implementation Research (Damschroder et al., 2009) provides the structural foundation. The commitment typology from Herscovitch and Meyer (2002) provides the diagnostic for distinguishing genuine adoption from performative compliance. And the change inventory — tracking cumulative change load by role group — provides the forward-looking capacity management that prevents organizations from launching the initiative that was one initiative too many.