How AI Improves Coding Accuracy in SNFs & LTC
- Dhini Nasution
- Dec 7, 2025
- 7 min read
Updated: Dec 8, 2025
From PDPM pressure to practical, safer automation

Skilled Nursing Facilities (SNFs) and long-term care (LTC) providers live and die by documentation and coding. Under the Patient Driven Payment Model (PDPM), Medicare payment is tied tightly to ICD-10 diagnosis coding, comorbidities, and functional status captured on the MDS, rather than therapy minutes. (CMS)
At the same time:
SNF residents carry heavy chronic disease burdens (hypertension, vascular disease, dementia, depression, GERD, and more). (PMC)
Coding errors and omissions in SNFs are common enough that coding societies now publish SNF-specific error lists and training programs. (AAPC)
Studies continue to show undercoding of comorbidities even in hospital ICD-10 data, with authors explicitly calling for automation to help coding teams. (PMC)
Layer on staffing shortages, high turnover, complex MDS rules, and year-over-year PDPM refinements, and it’s no surprise that coding intensity and accuracy are under a microscope. (PubMed)
This is where AI—used correctly—can help SNFs and LTC operators improve coding accuracy, reduce risk, and protect revenue, without asking already-burned-out staff to do even more manual chart review.
1. Why coding accuracy is uniquely hard in SNFs & LTC
1.1 Complex residents, complex rules
SNF and LTC residents are often the sickest, most complex patients in the Medicare population:
Common comorbid clusters include hypertension, vascular disease, dementia, arthritis, depression, and reflux. (PMC)
Many have long histories across multiple hospitals and specialists, with documentation scattered across systems and formats.
Under PDPM, that complexity is monetized through:
The ICD-10 primary diagnosis, which maps to a PDPM clinical category
NTA comorbidity points drawn from diagnoses and conditions captured on the MDS
SLP-related comorbidities and functional status scores that shape the remaining case-mix components
The more clinically accurate and complete the coding, the better PDPM reflects true resident acuity—and the more sustainable your margins.
1.2 Real-world coding problems in SNFs
Industry guidance and audits highlight a familiar pattern of SNF coding issues (AAPC):
Missing or nonspecific ICD-10 codes for the primary reason for the SNF stay
Incomplete capture of surgical history and prior hospital conditions that drive PDPM groups
Failure to code NTA comorbidities that are clearly documented in the record
MDS items and ICD-10 codes that don’t match the clinical chart
Research on ICD-10 validity more broadly shows that secondary conditions and comorbidities are systematically undercoded, especially when coding teams are stretched thin—prompting calls for automation and better tools. (PMC)
At the other end of the spectrum, overcoding and upcoding expose facilities to recoupments, penalties, and reputational risk. Provider guidance repeatedly warns about the financial and legal risks of both undercoding and overcoding.
1.3 PDPM and “coding intensity” pressure
Early and ongoing analyses of PDPM’s impact show that:
PDPM implementation is associated with increased coding intensity and higher Medicare expenditures per stay, even without clear changes in mortality or readmissions. (PubMed)
SNFs have historically shown lower coding intensity compared with hospitals, but PDPM shifted incentives sharply toward capturing more—and more specific—diagnoses. (Skilled Nursing News)
Regulators and researchers have flagged the need for continued monitoring to ensure PDPM incentives drive accurate—not inflated—coding. (PubMed)
In other words: SNF/LTC coding must now be both complete and defensible, under much closer scrutiny.
2. What “AI for coding” actually means
“AI” has become a buzzword, so let’s be specific. When we talk about AI improving coding accuracy in SNFs and LTC, we’re usually referring to a combination of:
Natural language processing (NLP): Extracts diagnoses, procedures, and clinical clues from free-text notes, discharge summaries, consults, and scanned documents.
Machine learning models: Learn patterns linking documentation, labs, meds, and prior codes to likely ICD-10 and HCC codes, then surface candidates for human review.
Workflow engines: Embed these suggestions directly into coder and clinician workflows so that chart review becomes an exercise in verification, not hunting for evidence. (PMC)
Real-world implementations show that AI-assisted coding and chart review can:
Increase coding throughput (e.g., 3× more charts reviewed with similar or better accuracy)
Reduce manual chart review time by cutting evidence search in half
Improve risk score accuracy and coding quality in risk-adjusted contracts
The key is how you use these tools in a SNF/LTC context.
3. How AI actually improves coding accuracy in SNFs & LTC
3.1 Surfacing “hidden” comorbidities and PDPM drivers
AI systems can scan entire longitudinal charts, including:
Hospital discharge summaries
Specialist consults
Therapy and nursing notes
Lab results and medication lists
…to identify conditions that are documented but not coded, such as:
Chronic kidney disease stages based on labs and nephrology notes
Neurologic conditions (stroke, aphasia, dysphagia) that drive SLP and NTA components
Surgical history and complex wounds that impact PDPM classification (ASHA)
NLP and AI-based risk adjustment tools are already used to:
Extract additional diagnoses from unstructured documentation
Map those diagnoses to ICD-10 and HCC codes
Flag gaps where chart evidence supports a code that is missing in claims
For SNFs/LTC, the same approach can be tuned to PDPM:
Aligning AI-identified diagnoses with PDPM clinical categories and NTA comorbidities
Highlighting residents who appear under-classified relative to their documented complexity
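To make the gap-flagging idea concrete, here is a minimal, purely illustrative sketch: compare codes supported by note text against codes already on the claim, and surface the difference for human review. The keyword-to-ICD-10 map, note text, and function name are hypothetical assumptions; real systems use full clinical NLP and terminology services rather than keyword matching.

```python
# Hypothetical sketch: flag documented-but-uncoded conditions.
# The keyword -> ICD-10 map below is illustrative, not a real terminology
# service; production tools use clinical NLP, negation detection, etc.
KEYWORD_TO_ICD10 = {
    "ckd stage 3": "N18.30",   # chronic kidney disease, stage 3 unspecified
    "dysphagia": "R13.10",     # dysphagia, unspecified
    "aphasia": "R47.01",       # aphasia
}

def find_coding_gaps(notes: list[str], coded: set[str]) -> set[str]:
    """Return ICD-10 codes supported by note text but absent from claims."""
    text = " ".join(notes).lower()
    supported = {code for kw, code in KEYWORD_TO_ICD10.items() if kw in text}
    return supported - coded

notes = [
    "Nephrology consult: CKD stage 3, stable.",
    "SLP eval: moderate dysphagia, thickened liquids recommended.",
]
gaps = find_coding_gaps(notes, coded={"N18.30"})
print(gaps)  # dysphagia is documented but not coded
```

A coder, not the tool, decides whether each flagged gap is a real omission; the sketch only narrows where to look.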
3.2 Reducing human error and variability
Manual coding in SNFs is vulnerable to:
Fatigue and cognitive overload
Inconsistent familiarity with PDPM rules and mapping tables
Varied experience among MDS coordinators and coders
AI-assisted coding tools help by:
Auto-suggesting primary and secondary diagnoses based on chart patterns
Checking that codes used on the MDS are actually supported by the documentation
Standardizing application of PDPM mapping rules and local coding policies
NLP-driven automation has been shown to reduce missed diagnosis codes and manual errors in other settings, and vendors report improved coding accuracy when AI suggestions are systematically reviewed by coders instead of starting from a blank slate.
3.3 Making chart review faster and more focused
In many SNFs, coding accuracy hinges on one thing: whether someone has time to read the chart carefully.
Recent work on AI-assisted chart review shows that:
“Computer-assisted chart review” lets skilled human reviewers work faster and more accurately, using the AI to pre-locate relevant evidence while humans adjudicate. (PMC)
In value-based contexts, AI-powered workflows have cut Annual Wellness Visit coding time while maintaining or improving HCC capture.
For SNFs and LTC, similar workflows can:
Pre-assemble evidence bundles per resident (e.g., all HF evidence, all diabetes evidence)
Present candidate codes side-by-side with evidence so coders and MDS nurses can confirm or reject with a click
Generate PDPM impact previews (how a code change affects case-mix groups) for internal QA and education.
The result is more consistent coding decisions in less time, without turning the process into a black box.
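The "evidence bundle" idea above can be sketched in a few lines: group every extracted snippet under the condition it supports, so a reviewer sees all heart-failure evidence together rather than hunting through the chart. The snippet structure and field names are illustrative assumptions, not a vendor schema.

```python
# Hypothetical sketch: pre-assemble per-condition "evidence bundles" so a
# coder reviews all evidence for one condition side by side.
from collections import defaultdict

def build_evidence_bundles(snippets: list[dict]) -> dict[str, list[str]]:
    """Group extracted evidence snippets by the condition they support."""
    bundles: dict[str, list[str]] = defaultdict(list)
    for s in snippets:
        bundles[s["condition"]].append(f'{s["source"]}: {s["text"]}')
    return dict(bundles)

snippets = [
    {"condition": "heart failure", "source": "discharge summary",
     "text": "HFrEF, EF 30%, on furosemide"},
    {"condition": "diabetes", "source": "med list",
     "text": "insulin glargine 20 units nightly"},
    {"condition": "heart failure", "source": "nursing note",
     "text": "2+ pitting edema, daily weights ordered"},
]
bundles = build_evidence_bundles(snippets)
print(len(bundles["heart failure"]))  # two snippets for the coder to confirm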
3.4 Ensuring alignment between MDS, ICD-10, and documentation
PDPM hinges on tight alignment between:
MDS diagnosis fields (e.g., I0020B, I8000)
ICD-10 codes chosen for those fields
The underlying medical record (hospital discharge, physician orders, assessments)
Coding and billing experts stress that incorrect or incomplete diagnosis capture leads to lower-paying PDPM groups and audit exposure.
AI can continuously cross-check:
Does every PDPM-relevant diagnosis on the MDS have traceable support in the chart?
Are there chart-documented conditions that should be present on the MDS but aren’t?
Are surgical and procedural histories accurately reflected, especially when they change PDPM classification? (CMS)
Instead of relying solely on sporadic manual audits, facilities can run daily or weekly AI checks to keep MDS–ICD-10–chart alignment tight.
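A minimal sketch of such a cross-check, assuming upstream extraction has already produced a set of MDS diagnosis codes and a set of chart-supported codes (both inputs here are illustrative):

```python
# Hypothetical sketch: a two-way alignment check between MDS diagnosis
# codes and chart-supported codes. Real checks would key off specific MDS
# items (I0020B, I8000) and carry evidence links, not bare code sets.
def alignment_check(mds_codes: set[str], chart_codes: set[str]) -> dict:
    """Flag MDS codes lacking chart support, and chart-supported
    conditions missing from the MDS."""
    return {
        "unsupported_on_mds": sorted(mds_codes - chart_codes),
        "missing_from_mds": sorted(chart_codes - mds_codes),
    }

result = alignment_check(
    mds_codes={"I50.22", "E11.9"},               # coded on the MDS
    chart_codes={"I50.22", "E11.9", "R13.10"},   # supported by the record
)
print(result["missing_from_mds"])  # dysphagia documented but not on the MDS
```

Run daily or weekly, a check like this turns alignment from a sporadic audit task into a routine report.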
3.5 Monitoring undercoding, overcoding, and intensity trends
Because PDPM and risk-based models are under scrutiny for coding intensity, facilities need to monitor not just individual codes but trends:
Are certain clinicians or units consistently undercoding comorbidities compared with documented burdens and peers? (PMC)
Are there sudden shifts toward higher-paying PDPM groups without corresponding clinical changes—a signal of possible upcoding or documentation drift? (PubMed)
Are known risk areas (e.g., NTAs, SLP comorbidities) being coded consistently and backed by strong evidence?
AI-enabled analytics and risk adjustment platforms already provide these kinds of coding accuracy dashboards for health plans and ACOs; the same principles can be adapted to SNF/LTC operations to support internal compliance and education.
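As a toy illustration of trend monitoring, the sketch below flags units whose average NTA comorbidity points per stay sit far from the facility-wide mean: a crude screen for possible undercoding (low outliers) or documentation drift (high outliers). The data, threshold, and function name are assumptions for illustration, not a validated method.

```python
# Hypothetical sketch: flag units whose mean NTA points per stay deviate
# from the facility-wide mean by more than z standard deviations.
from statistics import mean, stdev

def flag_outlier_units(nta_points: dict[str, list[float]], z: float = 1.0) -> list[str]:
    """Return units whose mean NTA points is an outlier vs. peer units."""
    unit_means = {u: mean(v) for u, v in nta_points.items()}
    overall = list(unit_means.values())
    if len(overall) < 2:
        return []
    mu, sd = mean(overall), stdev(overall)
    if sd == 0:
        return []
    return [u for u, m in unit_means.items() if abs(m - mu) / sd > z]

points = {
    "Unit A": [2, 3, 2],
    "Unit B": [2, 2, 3],
    "Unit C": [8, 9, 8],
}
print(flag_outlier_units(points))  # Unit C stands out for human review
```

An outlier is a prompt for chart review and education, not a verdict: a unit may legitimately serve a sicker population.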
4. Design principles for safe AI coding in SNFs & LTC
AI can absolutely make coding more accurate. It can also create new failure modes if used poorly. A few non-negotiables for the SNF/LTC environment:
4.1 AI as copilot, not autopilot
Evidence and expert commentary around AI-assisted chart review converge on the same model:
Skilled human reviewers, assisted—not replaced—by AI. (PMC)
For coding in SNFs/LTC, that means:
AI suggests codes and evidence; coders and clinicians confirm
MDS nurses remain accountable for final assessments
Coding staff can override suggestions and provide feedback to improve the system
4.2 Grounding in the full record, not just text snippets
Safe AI coding must be grounded in:
Structured data (labs, meds, vitals, orders)
Unstructured notes (hospital discharge, nursing, therapy)
MDS items and ICD-10 mappings
PDPM itself assumes that ICD-10 choices on the MDS reflect the full clinical picture, not just phrases in a single note. (CMS)
AI coding tools should mirror that assumption, not shortcut it.
4.3 Transparent “receipts” for every suggested code
Best-in-class AI risk adjustment and coding tools emphasize:
Clear evidence lists (“this diagnosis is suggested because…”), including note excerpts and data points
Simple UIs for coders to accept, modify, or reject each suggestion
Audit logs showing who accepted which codes and when
For SNFs, this transparency is essential if you ever need to:
Defend coding choices in the face of an audit
Demonstrate that PDPM changes reflect genuine acuity, not undocumented upcoding
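One way to picture the "receipts" plus audit-log pattern: each decision record stores the suggested code, its supporting evidence, who accepted or rejected it, and when. The field names below are hypothetical; any real system would also need access controls and tamper-evident storage.

```python
# Hypothetical sketch: append-only audit log of coder decisions, with
# the supporting evidence ("receipts") stored alongside each code.
from datetime import datetime, timezone

audit_log: list[dict] = []

def record_decision(code: str, evidence: list[str],
                    coder: str, accepted: bool) -> None:
    """Append an auditable record of a coder's decision on a suggestion."""
    audit_log.append({
        "code": code,
        "evidence": evidence,   # the receipts behind the suggestion
        "coder": coder,
        "accepted": accepted,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

record_decision(
    code="I50.22",
    evidence=["discharge summary: HFrEF, EF 30%, on furosemide"],
    coder="jsmith",
    accepted=True,
)
print(audit_log[-1]["accepted"])  # the decision and its evidence are retained
```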
4.4 Governance: coding, compliance, and clinical all at the table
Finally, AI coding in SNFs/LTC needs shared governance:
Coding and HIM leaders (accuracy, guidelines, audit-readiness)
Clinical leadership (appropriateness of diagnoses, PDPM ethics)
Compliance and legal (regulatory risk, vendor contracts)
IT/analytics (data integration, monitoring, drift detection)
Their job is to define guardrails:
Where AI can and cannot make suggestions
Minimum evidence thresholds for codes to be surfaced
Monitoring for error patterns, drift, or unsafe shortcuts
5. A pragmatic roadmap for SNF & LTC operators
If you’re considering AI to improve coding accuracy, you don’t have to “boil the ocean.” A realistic progression:
Baseline measurement
Quantify current denial rates, audit outcomes, and PDPM case-mix accuracy.
Sample charts to measure undercoding of common comorbidities vs documentation.
Start with AI-assisted chart review for high-value residents
Post-acute, medically complex, or high-NTA residents where coding has the biggest margin impact.
Use AI to pre-assemble evidence and suggested codes; coders validate.
Expand to continuous MDS–ICD–chart alignment checks
Daily/weekly AI runs to flag mismatches and missing PDPM drivers.
Add analytics for intensity and compliance monitoring
Track coding intensity trends, case-mix shifts, and undercoding patterns by unit or provider.
Iterate with training and feedback
Use AI findings to guide coder and MDS nurse education.
Feed coder feedback into model refinement.
Done well, AI in SNFs & LTC is not about “letting the machine code everything.” It’s about giving your clinicians and coders superpowers: the ability to see the full story in the chart, consistently, under PDPM’s pressure—without burning out.