Milestone Design

The Translation Layer Between Grant Objectives and Operational Work

A federally qualified health center receives a three-year HRSA behavioral health integration grant. The Notice of Award states the objective: “improve access to behavioral health services for underserved populations.” The program manager reads this and asks the only question that matters operationally: what, exactly, do we do on Monday morning?

The answer is milestones. Milestones are the translation layer between what a funder wants to achieve and what an organization actually does. The quality of that translation determines whether the grant produces real outcomes or produces reports about activity that may or may not have changed anything. A grant objective like “improve access to behavioral health services” could mean hiring two LCSWs, launching a telehealth program, reducing wait times, increasing screening rates, or all of the above. The objective does not specify. The milestones must.

This is not a formatting exercise. Milestone design is the single highest-leverage decision in grant program execution. Milestones that are too vague cannot be measured. Milestones that measure the wrong things produce perverse incentives. Milestones without defined evidence standards create programs that accomplish real work but cannot prove it at reporting time. Milestones without dependency mapping create programs that attempt parallel execution of sequential work, producing rework, delay, and budget waste. And milestones designed without reference to the budget create programs where spending and progress are disconnected — where money goes out the door without producing the operational changes the grant was funded to create.


Milestones as Translation

Grant objectives are written in the language of policy intent. They describe what the funder wants to see in the world: improved access, reduced disparities, strengthened workforce capacity, enhanced care coordination. These are real goals, but they are not operational instructions. They must be translated into the language of execution: specific actions, specific quantities, specific timelines, specific evidence.

The translation process has three layers:

Objective to outcome. “Improve access to behavioral health services” becomes “reduce average wait time for behavioral health intake from 6 weeks to 10 days” and “increase behavioral health screening rate in primary care from 15% to 70%.” This layer converts policy language into measurable end states. The measurable end states must be specific enough that two independent observers would agree on whether they had been achieved.

Outcome to milestone. “Reduce average wait time to 10 days” becomes a sequence: “Hire and credential 2 LCSWs by month 6. Establish 200 weekly appointment slots by month 9. Achieve 70% slot utilization by month 12. Achieve average wait time under 10 days by month 15.” This layer converts end states into the operational steps required to produce them. Each step must be achievable independently, and the sequence must reflect actual operational dependencies.

Milestone to evidence. “Hire and credential 2 LCSWs by month 6” requires defined evidence: signed offer letters, credentialing committee minutes, state license verification, NPI assignments, EHR access provisioning records. If the evidence standard is not defined at milestone design time, the program will discover at reporting time that the evidence does not exist — that the credentialing was completed but nobody saved the committee minutes, that the licenses were verified verbally but no documentation was retained. The evidence chain must be designed forward, not reconstructed backward.

The W.K. Kellogg Foundation Logic Model Development Guide (2004) formalizes this translation as the logic model: inputs lead to activities, activities produce outputs, outputs produce short-term outcomes, short-term outcomes produce long-term outcomes. Milestones correspond to the output and short-term outcome levels of the logic model. A program that sets milestones only at the activity level (“conduct community meetings,” “develop protocols”) has stopped the translation too early. A program that sets milestones only at the long-term outcome level (“reduce health disparities”) has set targets it cannot influence within the grant period. The milestone set must span the causal chain from activities through short-term outcomes, with enough intermediate checkpoints to detect whether the causal logic is actually working.


Milestone Design Principles: SMART Applied to Grants

The SMART framework (Specific, Measurable, Achievable, Relevant, Time-bound) is ubiquitous in grant writing guides. It is also, in its generic form, nearly useless. Saying a milestone should be “specific” does not help a program manager decide how specific. The framework becomes operationally useful only when each dimension is interpreted in the specific context of grant-funded healthcare programs, where the constraints are particular and the failure modes are predictable.

Specific enough for evidence collection. “Improve behavioral health capacity” is not specific. “Hire 2 LCSWs” is specific but incomplete — it does not specify credential requirements, practice setting, or service modality. “Hire and credential 2 LCSWs (or equivalent licensed behavioral health providers) in the primary care clinic, with active state licensure and panel credentialing, providing in-person and telehealth services” is specific enough that the evidence requirements are self-evident. The test for specificity is: could someone who was not involved in designing this milestone determine, from the milestone statement alone, exactly what evidence to collect? If the answer is no, the milestone is underspecified.

Measurable with available data. This is the constraint that separates aspiration from execution. “Achieve 70% PHQ-9 screening rate” is measurable only if the EHR is configured to capture PHQ-9 administration as a discrete, queryable data element. If screening is documented in free-text clinical notes, the data exists in theory but is not extractable in practice. Milestone design must account for the data infrastructure that actually exists, not the data infrastructure the program wishes it had. The CDC’s Framework for Program Evaluation in Public Health (1999) emphasizes feasibility as an explicit evaluation standard — the evaluation design (and by extension, the milestone measurement) must be achievable with available resources, time, and data systems.

Achievable within the grant period. HRSA behavioral health grants typically run three to five years. Milestones must be achievable within that window, accounting for startup delays. A program that budgets four months for hiring in a rural area where behavioral health recruitment averages six to nine months has set an unachievable milestone that will produce its first reporting failure before the program has delivered its first service. Achievability assessment requires honest estimation of operational timelines — not the timeline the funder wants to see, but the timeline the organization’s operational history predicts. PERT estimation (optimistic, most likely, pessimistic) from Operations Research Module 4 applies directly: the “most likely” hiring timeline for a rural LCSW is 6 months, the optimistic is 4, the pessimistic is 12. The PERT expected duration is (4 + 4 × 6 + 12)/6 ≈ 6.7 months with a standard deviation of (12 − 4)/6 ≈ 1.3 months; under the normal approximation, a milestone set at month 4 has only about a 2% probability of success, while a milestone set at month 9 has roughly a 96% probability. The difference between those two choices is the difference between a program that starts with a reporting success and one that starts with a reported failure.
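The PERT arithmetic is mechanical enough to script. The sketch below (the function name is illustrative) computes the probability of meeting a deadline from a three-point estimate, using the standard normal approximation:

```python
from math import erf, sqrt

def pert_on_time_probability(optimistic, most_likely, pessimistic, deadline):
    """Probability of finishing by `deadline` under a PERT three-point
    estimate, using the normal approximation to the task duration."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    sd = (pessimistic - optimistic) / 6
    z = (deadline - mean) / sd
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z

# Rural LCSW hiring estimate from the text: O=4, M=6, P=12 months.
print(round(pert_on_time_probability(4, 6, 12, 4), 2))  # month-4 deadline
print(round(pert_on_time_probability(4, 6, 12, 9), 2))  # month-9 deadline
```

Running the same function across candidate deadlines makes the achievability trade-off explicit before a date is written into the workplan.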

Relevant to the stated objective. Every milestone must trace back to the grant objective through the logic model. If the objective is “improve access to behavioral health services,” a milestone of “develop a community needs assessment” is relevant only if the needs assessment informs service design in a way that demonstrably improves access. If the needs assessment is a compliance artifact that will not change any operational decision, it is not relevant — it is activity that consumes budget and time without advancing the objective. The relevance test is: if this milestone were removed, would the program be less likely to achieve its stated outcomes? If not, the milestone is consuming resources that should be deployed elsewhere.

Time-bound with interim checkpoints. A milestone due at month 12 with no interim checkpoints is a twelve-month information blackout. The program manager discovers at month 11 that the milestone will not be met and has no time to intervene. Interim checkpoints convert a single pass/fail at the deadline into a trajectory that is visible throughout the period. “Hire 2 LCSWs by month 9” becomes: “Post positions by month 2. Begin candidate interviews by month 4. Extend first offer by month 6. Complete credentialing for first hire by month 8. Both providers seeing patients by month 10.” Each checkpoint creates an opportunity to detect delay and intervene — to expand the candidate pool, increase the salary offer, engage a recruiter, or adjust the timeline before the deadline arrives.
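A checkpoint schedule like this one can be monitored mechanically each month. The sketch below (names and months are illustrative, taken from the hiring example) lists any checkpoint whose due month has passed without completion:

```python
# Interim checkpoints for the hiring milestone, as (name, due month).
checkpoints = [
    ("Post positions", 2),
    ("Begin candidate interviews", 4),
    ("Extend first offer", 6),
    ("Complete credentialing for first hire", 8),
    ("Both providers seeing patients", 10),
]

def overdue(checkpoints, completed, current_month):
    """Checkpoints whose due month has passed without being completed."""
    return [name for name, due in checkpoints
            if due < current_month and name not in completed]

done = {"Post positions", "Begin candidate interviews"}
print(overdue(checkpoints, done, current_month=7))
```

At month 7 with only the first two checkpoints complete, the month-6 offer checkpoint surfaces as overdue — an intervention signal months before the month-9 deadline.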


Cosmetic vs. Substantive Milestones

The distinction between cosmetic and substantive milestones is the single most important quality judgment in milestone design. Cosmetic milestones demonstrate that activity occurred. Substantive milestones demonstrate that capability was built or outcomes changed. The distinction is not academic. Federal funders are increasingly demanding substantive milestones, and SAMHSA’s Government Performance and Results Act (GPRA) measures explicitly require outcome-level data, not activity counts.

Cosmetic milestones measure effort. They answer the question “did you do something?” Examples: conducted 3 community meetings. Hired 2 staff. Developed a referral protocol. Purchased telehealth equipment. Completed 12 training sessions. These milestones are easy to achieve and easy to document. They are also nearly meaningless as indicators of program effectiveness. An organization can conduct 3 community meetings that no one attends, hire 2 staff who leave in 6 months, develop a referral protocol that no clinician follows, purchase equipment that sits unused, and complete 12 training sessions that change no behavior. Every cosmetic milestone was “achieved.” The program produced nothing.

Substantive milestones measure capability or outcome. They answer the question “did something change?” Examples: achieved 70% PHQ-9 screening rate in primary care. Reduced average behavioral health wait time from 6 weeks to 10 days. Established bidirectional referral pathway with 80% referral completion rate. Achieved 200 weekly behavioral health appointment slots with 70% utilization. Retained both behavioral health providers through the first 18 months. These milestones are harder to achieve and harder to document. They are also the only milestones that tell you whether the program is working.

The shift from cosmetic to substantive milestones follows a predictable pattern across the grant lifecycle. Year 1 milestones are necessarily more activity-oriented because infrastructure must be built before outcomes can be measured. But even Year 1 milestones can be substantive rather than cosmetic if they measure capability rather than activity. “Hired 2 staff” is cosmetic. “2 credentialed behavioral health providers seeing patients in EHR-scheduled appointments, with telehealth capability enabled” is substantive — it confirms not just that hiring occurred but that the operational capability the hiring was supposed to produce actually exists.

Funders notice the difference. HRSA project officers reviewing semi-annual progress reports can distinguish between organizations reporting activity and organizations reporting capability. The former produce reports that read like task lists. The latter produce reports that describe what changed and what the data shows. When continuation funding decisions are made — and they are discretionary decisions, not automatic renewals — the organizations reporting substantive progress have a material advantage.


The Evidence Chain

Every milestone requires a defined evidence standard: what documentation will prove the milestone was achieved? The evidence chain must be designed at the same time as the milestone, not after the fact.

The evidence chain has three components:

Evidence type. What kind of documentation constitutes proof? For a hiring milestone: signed offer letter, credential verification, EHR access confirmation. For a screening rate milestone: EHR query results showing PHQ-9 administration count divided by eligible encounter count. For a wait time milestone: scheduling system report showing time from referral to first available appointment. The evidence type must be specific enough that the data source is unambiguous.

Collection method. How will the evidence be captured? If the evidence is an EHR report, who runs it, when, and what parameters define the query? If the evidence is a signed document, where is it stored and who is responsible for retaining it? If the evidence requires manual data collection — patient surveys, chart reviews, observation logs — who collects it, how frequently, and what quality controls apply? The CDC evaluation framework identifies data quality as a prerequisite for evaluation utility: evidence that is unreliable, incomplete, or inconsistently collected undermines the entire milestone structure.

Retention and access. Federal grant records must be retained for three years after submission of the final expenditure report (per 2 CFR 200.334), and longer if any litigation, claim, or audit is pending. Evidence documentation must be stored in a system that is accessible, searchable, and protected against loss. An organization that stores milestone evidence in individual staff email inboxes or on local hard drives has not solved the evidence problem — it has created an audit risk that will materialize when the responsible staff member leaves or the hard drive fails.
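The three components can be captured as one structured record per evidence item, so that type, collection method, and custody never live only in someone's head. The schema below is a hypothetical sketch, not a standard:

```python
from dataclasses import dataclass

@dataclass
class EvidenceItem:
    """One evidence requirement attached to a milestone.
    Field names are illustrative, not a standard schema."""
    milestone: str
    evidence_type: str        # e.g. "EHR query", "signed offer letter"
    collection_method: str    # who captures it, when, with what parameters
    custodian: str            # named person responsible for retention
    storage_location: str     # shared, access-controlled system
    status: str = "defined"   # defined -> collecting -> collected -> verified

item = EvidenceItem(
    milestone="Hire and credential 2 LCSWs by month 6",
    evidence_type="credentialing committee minutes",
    collection_method="exported to grants share after each committee meeting",
    custodian="credentialing coordinator",
    storage_location="grants document management system",
)
print(item.status)
```

Creating these records at milestone design time is the forward-design discipline the section describes: an item that still reads "defined" months into the period is a visible gap, not a retrospective discovery.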

The most common evidence failure is the retrospective discovery. The program achieves a milestone — the screening rate genuinely reaches 70% — but at reporting time, nobody can prove it. The EHR query was never built. The baseline data was never captured. The screening rate was estimated from a sample rather than measured from the population. The report must either state the achievement without evidence (which the funder will question) or acknowledge that the evidence does not exist (which undermines confidence in all other reported milestones). This failure is entirely preventable. It is caused by designing milestones without simultaneously designing the evidence chain.

SAMHSA’s GPRA data collection requirements illustrate the evidence chain at the federal level. GPRA requires grantees to administer standardized intake and follow-up instruments (the GPRA Client Outcome Measures tool) and report aggregate results. Organizations that discover these requirements at the first reporting deadline — rather than building them into intake workflows from day one — face a data gap that cannot be retroactively filled. The evidence chain must be forward-designed: specified at milestone design, built into operational workflows at program launch, and tested before the first reporting period.


Dependency Mapping

Milestones have dependencies. They form a network, not a list. The dependency structure determines the critical path — the longest chain of sequential milestones that governs the minimum possible timeline for the program. Operations Research Module 4 (Critical Path Analysis) provides the formal analytical framework; here the focus is on how dependency mapping applies specifically to grant milestone design.

The dependency chain for a behavioral health integration grant follows a predictable structure:

Hire staff (months 1-9) → Credential and onboard (months 7-11) → Configure EHR templates and workflows (months 8-12) → Launch services (months 10-14) → Ramp to target volume (months 14-20) → Measure utilization and outcomes (months 18-24) → Demonstrate sustainability (months 24-36).

Each arrow represents a hard dependency: the downstream milestone cannot begin (or cannot meaningfully begin) until the upstream milestone is substantially complete. You cannot launch behavioral health services without credentialed providers. You cannot measure utilization without operational services. You cannot demonstrate sustainability without utilization data.

The critical insight is that delays propagate forward through the dependency chain. If hiring takes 9 months instead of 6, credentialing starts 3 months late. Credentialing delay pushes service launch, which pushes utilization ramp, which pushes outcome measurement. A 3-month hiring delay at the beginning of a 36-month grant becomes a 3-month outcome measurement delay at the end — and if the grant period is fixed, those 3 months may eliminate the ability to demonstrate outcomes before closeout.

Dependency mapping also reveals parallel paths — milestones that can proceed simultaneously because they do not depend on each other. While recruitment is underway, the program can simultaneously develop clinical protocols, configure EHR templates (in a test environment), establish referral agreements with community partners, and develop the data collection infrastructure. These parallel activities have float — they can absorb delays without affecting the critical path. But they only have float if someone has identified them as parallel. Without explicit dependency mapping, program managers treat all milestones as equally urgent, distributing effort evenly rather than concentrating it on the critical path.

The practical tool is simple: a milestone dependency diagram (network diagram) that shows which milestones depend on which others, with estimated durations and explicit identification of the critical path. This diagram should be created during grant application (it informs the workplan and timeline sections of the proposal) and updated quarterly during execution. When a critical-path milestone slips, the diagram immediately shows which downstream milestones are affected and by how much — converting a vague sense that “we’re behind” into a precise understanding of what is at risk and what intervention is needed.
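A minimal version of that network diagram can be computed rather than drawn. The sketch below (durations in months are illustrative, simplified from the chain above) finds each milestone's earliest finish and walks back along the critical path:

```python
from functools import lru_cache

# Milestone network: name -> (duration in months, predecessors).
tasks = {
    "hire":       (9, []),
    "credential": (2, ["hire"]),
    "ehr":        (4, []),            # parallel path with float
    "protocols":  (3, []),            # parallel path with float
    "launch":     (2, ["credential", "ehr", "protocols"]),
    "ramp":       (6, ["launch"]),
    "measure":    (4, ["ramp"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    dur, preds = tasks[task]
    return dur + max((earliest_finish(p) for p in preds), default=0)

finish = {t: earliest_finish(t) for t in tasks}

# Walk back from the last-finishing task along the latest-finishing predecessor.
path, node = [], max(finish, key=finish.get)
while node:
    path.append(node)
    preds = tasks[node][1]
    node = max(preds, key=finish.get) if preds else None
print(finish["measure"], list(reversed(path)))
```

With these durations the critical path runs hire → credential → launch → ramp → measure, finishing at month 23; EHR configuration and protocol development carry float. Updating the durations quarterly converts "we're behind" into a precise statement of which downstream milestones slip and by how much.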


Healthcare Example: Three-Year HRSA Behavioral Health Integration Grant

Consider a rural health system — a 25-bed critical access hospital with three primary care clinics, serving a county of 28,000 in the Pacific Northwest — that receives a 3-year, $2.4M HRSA behavioral health integration grant. The following milestone set illustrates both the cosmetic and substantive approaches to the same program objectives.

Year 1: Infrastructure (Months 1-12)

| Objective | Cosmetic Milestone | Substantive Milestone |
| --- | --- | --- |
| Staffing | Hired 2 behavioral health providers | 2 licensed behavioral health providers (LCSW or LMHC) credentialed, paneled, and seeing patients in EHR-scheduled appointments by month 9; 3rd provider (psychiatric prescriber) recruited with signed LOI by month 12 |
| EHR readiness | Updated EHR to support behavioral health | BH intake template, PHQ-9/GAD-7 screening flowsheets, warm-handoff workflow, and bidirectional referral tracking configured in EHR; validated in test environment; deployed to all 3 clinic sites by month 8 |
| Training | Conducted training for clinical staff | 85% of primary care providers and 90% of nursing staff completed 4-hour collaborative care training (documented attendance + post-test score >= 80%); refresher training scheduled quarterly |
| Referral pathways | Developed referral protocols | Written referral agreements signed with 3 community BH agencies; bidirectional referral tracking active in EHR; baseline referral completion rate measured and documented |
| Data infrastructure | Established data collection processes | GPRA intake instrument integrated into BH intake workflow; baseline screening rate, wait time, and referral completion rate captured for months 1-6 to establish pre-intervention benchmarks |

The cosmetic column would pass a cursory compliance review. The substantive column tells the funder — and the program manager — whether the program is actually ready to deliver services in Year 2. The difference is not wordsmithing. It is the difference between measuring activity and measuring operational capability.

Year 2: Implementation (Months 13-24)

| Objective | Cosmetic Milestone | Substantive Milestone |
| --- | --- | --- |
| Service volume | Provided behavioral health services | 200 weekly BH appointment slots established across 3 sites; 70% slot utilization achieved by month 18; 85% utilization by month 24 |
| Screening | Implemented screening protocol | PHQ-9 screening rate in primary care reaches 50% by month 15, 70% by month 21; GAD-7 screening rate reaches 40% by month 18 |
| Access | Improved access to behavioral health | Average wait time for BH intake reduced from baseline of 42 days to 21 days by month 18, 10 days by month 24; measured monthly from scheduling system data |
| Care integration | Integrated behavioral health into primary care | Warm-handoff rate (same-day BH contact after positive screen) reaches 60% by month 18; documented in EHR workflow data |
| Data collection | Collected program data | GPRA follow-up rate at 6 months post-intake reaches 70%; data quality audit completed at month 18 with <5% missing critical fields |

Year 2 substantive milestones are quantitative, time-bound, and tied to specific data sources. They create a trajectory that is visible quarterly. If utilization is at 40% at month 15 when the target is 50%, the program manager knows the ramp is behind schedule and can intervene — add appointment slots, extend clinic hours, increase referral outreach — before the month 18 checkpoint arrives.

Year 3: Optimization and Sustainability (Months 25-36)

| Objective | Cosmetic Milestone | Substantive Milestone |
| --- | --- | --- |
| Outcomes | Demonstrated improved outcomes | 30% reduction in ED utilization for BH-related visits among enrolled patients (measured by claims data comparison, baseline vs. months 25-36); PHQ-9 score improvement of >= 5 points for 50% of patients with 6-month follow-up |
| Sustainability | Developed sustainability plan | Revenue model documented: BH services generating >= $X in billable revenue per month (target: 65% of BH provider salary costs covered by clinical revenue by month 30); payer contracts negotiated for BH services; board resolution committing operational funds to BH positions post-grant |
| Workforce stability | Maintained program staffing | 80% retention of grant-funded BH staff through month 36; if turnover occurs, replacement hired within 90 days |
| Community impact | Engaged community stakeholders | Community BH needs re-assessment completed at month 30; results compared to Year 1 baseline; findings presented to county health board with documented attendance and action items |
| Knowledge transfer | Shared program learnings | Replicable program model documented (workflow specifications, training curriculum, EHR configuration guide); presented at regional or national conference; available for peer organizations |

Year 3 is where the cosmetic-substantive distinction matters most. A cosmetic “developed sustainability plan” milestone can be achieved by writing a document. A substantive sustainability milestone requires demonstrating that the program generates revenue, has payer contracts, and has a board commitment to continued funding. The funder reads the Year 3 report knowing that the grant is ending. They want evidence that the investment will persist — not a plan for persistence, but operational indicators of it.


The Milestone-Budget Linkage

Milestones and budget should be co-designed. This is not a best practice suggestion. It is an operational necessity. A milestone without a budget implication is a plan without resources. A budget line without a milestone connection is spending without purpose.

The linkage operates in both directions:

Milestone to budget: what does this milestone cost to achieve? Hiring and credentialing 2 LCSWs requires: recruitment costs ($5K-15K if using a recruiter), salary and benefits ($85K-$120K per provider per year in a rural market), credentialing costs (state license, NPI, panel applications), EHR access and configuration ($2K-5K per provider), office space and equipment ($5K-10K setup). A milestone set without this cost analysis is a promise without a price tag. When the budget is exhausted before the milestone is achieved, the program faces a choice between requesting a budget modification (which takes 30-90 days and is not guaranteed) and abandoning the milestone.

Budget to milestone: what does this spending produce? A $180K telehealth platform contract should connect to specific milestones: platform operational by month 8, 50 weekly telehealth appointments available by month 12, 70% of telehealth slots utilized by month 18. If the $180K is spent and no milestone tracks what it produced, the budget report shows an expenditure and the progress report shows an activity, but nothing connects the two. The funder cannot determine whether the $180K produced value.

The milestone-budget linkage also enables burn rate analysis at the milestone level. If a program has spent 60% of its budget but achieved only 30% of its milestones, the spending-to-progress ratio signals trouble: either the remaining milestones are underfunded, or money is being spent on activities that do not advance milestones. This ratio — budget consumed versus milestones achieved — is the single most diagnostic metric for grant program health. It should be calculated quarterly and reported to program leadership.
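The spending-to-progress ratio is a one-line computation once both percentages are tracked quarterly. A sketch, using the 1.5:1 alert threshold this module suggests (the function name is illustrative):

```python
def spend_to_progress_ratio(budget_spent_pct, milestones_achieved_pct):
    """Budget consumed vs. milestones achieved, both as percentages.
    A ratio above ~1.5 signals spending outpacing progress."""
    if milestones_achieved_pct == 0:
        return float("inf")  # spending with zero achieved milestones
    return budget_spent_pct / milestones_achieved_pct

# The example from the text: 60% of budget spent, 30% of milestones achieved.
ratio = spend_to_progress_ratio(60, 30)
print(ratio, "ALERT" if ratio > 1.5 else "ok")
```

A 2.0 ratio, as in the example, says every point of progress is costing twice its budgeted share — either the remaining milestones are underfunded or spending is not advancing milestones at all.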

Under 2 CFR 200.301, federal agencies are required to relate financial data to performance accomplishments. This is not aspirational guidance. It is a regulatory requirement that the milestone-budget linkage directly serves. An organization that can demonstrate, for each dollar spent, which milestone it advanced and what evidence supports achievement, is in the strongest possible position for continuation funding, audit, and funder confidence.


The Product Owner Lens

What is the funding/compliance/execution problem? Milestones are designed as compliance artifacts rather than operational tools, producing programs that report activity without measuring capability, spend budgets without tracking progress, and discover evidence gaps at reporting time rather than at design time.

What mechanism explains the operational bottleneck? The translation from grant objectives to operational milestones requires three simultaneous design decisions — outcome specificity, evidence definition, and dependency mapping — that are typically made by different people at different times, if they are made at all. The separation of milestone design from budget design disconnects spending from progress, making it impossible to detect whether the program is converting resources into results until it is too late to intervene.

What controls or workflows improve it? Co-design milestones with evidence standards and budget linkages at application time, not post-award. Build interim checkpoints into every milestone that spans more than one quarter. Create a milestone dependency diagram and update it quarterly. Define evidence collection methods and assign evidence custodians before the program launches.

What should software surface? Milestone-to-budget ratio (percentage of budget consumed versus percentage of milestones achieved), with alerts when the ratio exceeds 1.5:1. Evidence collection status for each milestone (defined / collection active / collected / verified). Dependency chain visualization showing critical path and current status of each dependency. Interim checkpoint dashboard showing trajectory toward each milestone, not just current status. Days until next reporting deadline with milestone evidence completeness score.

What metric reveals risk earliest? The evidence readiness score at the 25% mark of each milestone period. For each active milestone, what percentage of required evidence types have been defined, what percentage have active collection methods, and what percentage have at least one data point captured? A milestone at 25% of its period with 0% evidence readiness is a milestone that will fail at reporting time regardless of whether the underlying work is progressing. This metric is computable from a milestone tracking system on day one and predicts reporting failures months before they occur.
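The evidence readiness score can be computed from the same tracking data. The sketch below assumes a hypothetical per-item record with three boolean flags, mirroring the three questions in the paragraph above:

```python
def evidence_readiness(items):
    """Average readiness across evidence items, each scored on three flags:
    requirement defined, collection method active, at least one data point
    captured. `items` is a list of dicts (a hypothetical schema)."""
    if not items:
        return 0.0
    scores = [
        (item["defined"] + item["collection_active"] + item["data_captured"]) / 3
        for item in items
    ]
    return sum(scores) / len(scores)

items = [
    {"defined": True,  "collection_active": True,  "data_captured": False},
    {"defined": True,  "collection_active": False, "data_captured": False},
]
print(round(evidence_readiness(items), 2))
```

A milestone whose items score 0.0 at the 25% mark of its period is the failure-in-waiting the text describes, regardless of how the underlying work is progressing.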


Warning Signs

Milestones read like a task list. If every milestone starts with a verb — “conduct,” “develop,” “establish,” “complete” — and none include a quantity, a threshold, or a measurable change, the milestone set is cosmetic. It will produce reports about activity without evidence of impact.

No evidence standard is defined. If the program manager cannot name, for each milestone, what specific documentation will prove achievement, the evidence chain does not exist. The program will discover this gap at the first reporting deadline.

Milestones have no interim checkpoints. A 12-month milestone with no quarterly checkpoints is a 12-month information blackout. The program cannot detect deviation until it is too late to correct course.

The milestone set and the budget were designed by different people. When the program director designs milestones and the grants office designs the budget independently, the spending plan and the work plan are disconnected. Budget lines will not map to milestones, and milestone costs will not be reflected in the budget.

All milestones have the same deadline. If the workplan shows all Year 1 milestones due at month 12, the dependency structure has not been mapped. Some milestones must be completed before others can begin. A flat deadline implies parallel execution of what are actually sequential dependencies.

The organization has never measured what the milestone requires. A milestone of “achieve 70% screening rate” in an organization that has never measured its screening rate has no baseline, no data infrastructure, and no evidence that the measurement is feasible. The milestone may be the right target, but the measurement capability must be built first — and that capability-building should itself be a milestone.


Integration Hooks

Operations Research Module 4 (Critical Path Analysis). Milestone dependency mapping is critical path analysis applied to grant programs. The formal CPM/PERT framework from OR M4 applies directly: milestones are nodes, dependencies are arcs, durations are estimated (with uncertainty, making this a PERT problem), and the critical path identifies which milestones govern the overall timeline. A program manager who has constructed a CPM network for their milestone set knows which milestones have float (and can tolerate delay) and which are on the critical path (and will propagate delay to every downstream milestone). Without this analysis, the program manager allocates attention based on visibility or urgency rather than structural importance — the failure pattern described in OR M4 where the most visible workstream gets the most attention while the critical-path workstream quietly slips.

Workforce Module 7 (Change Readiness). Grant milestones frequently assume organizational readiness that does not exist. A milestone of “achieve 70% PHQ-9 screening rate by month 15” assumes that primary care providers are willing and able to integrate behavioral health screening into their workflows. If change readiness — the combination of change valence (“we believe this is important”) and change efficacy (“we believe we can do this”) described in WF M7 — is low, the screening rate milestone will fail regardless of how well the EHR is configured or how many training sessions are conducted. Milestone design must account for readiness: if readiness assessment reveals low efficacy, the milestone set should include readiness-building milestones (pilot testing, early wins, leadership engagement) before the performance milestones that depend on workforce adoption. Setting performance milestones in an unready organization is not ambitious. It is planning for failure.


Key Frameworks and References

  • W.K. Kellogg Foundation Logic Model Development Guide (2004) — the standard reference for translating program objectives into the inputs-activities-outputs-outcomes causal chain; milestone design operationalizes the logic model
  • CDC Framework for Program Evaluation in Public Health (1999) — establishes utility, feasibility, propriety, and accuracy as evaluation standards; milestone measurement must meet the feasibility standard
  • SAMHSA GPRA (Government Performance and Results Act) Measures — federal performance measurement requirements for SAMHSA grantees; defines standardized outcome measures and data collection instruments
  • HRSA Performance Improvement and Measurement System (PIMS) — HRSA’s performance management framework requiring grantees to report on program-specific performance measures
  • 2 CFR 200.301 — requires federal agencies to relate financial data to performance accomplishments; the regulatory basis for milestone-budget linkage
  • 2 CFR 200.334 — record retention requirements: three years after final expenditure report, longer if audit pending
  • 2 CFR 200.328 — monitoring and reporting program performance; requires performance reports at intervals specified in the terms and conditions
  • PERT (Program Evaluation and Review Technique) — three-point estimation method (optimistic, most likely, pessimistic) for scheduling under uncertainty; applies to milestone timeline estimation
  • CPM (Critical Path Method) — identifies the longest dependent chain of tasks; applies to milestone dependency mapping
  • Weiner, B.J. (2009) — theory of organizational readiness for change; change valence and change efficacy as prerequisites for implementation success