
What a RAF Score Actually Means for Your Clinic

  • Writer: Dhini Nasution
  • Dec 7, 2025
  • 6 min read

Updated: Dec 8, 2025

Beyond “1.0” – how risk scores reshape care, staffing, and revenue




Most clinicians know that RAF scores matter for Medicare Advantage and other value-based contracts. Fewer have a clear mental model of what a RAF score actually is and what it means for day-to-day clinical and operational decisions. 

In under 10 minutes, here’s how to think about RAF like a vital sign for your clinic—not just a finance metric. 


1. RAF in one sentence 


CMS uses risk adjustment to translate a patient’s diagnoses and demographics into a risk score that predicts their expected cost of care. 


  • A RAF score of 1.0 ≈ the average Traditional Medicare beneficiary. 

  • A RAF of 2.0 ≈ roughly twice the expected cost; a RAF of 0.5 ≈ half. 


Each year, CMS publishes the HCC (Hierarchical Condition Category) model and coefficients that turn ICD-10 codes plus demographics into that score. 
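That coefficient mechanics can be sketched in a few lines. This is a toy illustration only: the demographic and HCC coefficients below are invented, and the real model (published annually by CMS) includes hierarchies, interaction terms, and many more categories.

```python
# Toy sketch of how an HCC-style model turns demographics plus diagnoses
# into a risk score. All coefficients here are hypothetical, for
# illustration only; real values come from CMS's annual model release.

DEMOGRAPHIC_COEFS = {("F", "75-79"): 0.45}           # hypothetical demographic factor
HCC_COEFS = {"HCC18": 0.30, "HCC85": 0.33}           # illustrative HCC coefficients

def raf_score(demo_key, hcc_list):
    """Sum the demographic coefficient plus one coefficient per distinct HCC."""
    score = DEMOGRAPHIC_COEFS.get(demo_key, 0.0)
    for hcc in set(hcc_list):                        # each HCC counts once
        score += HCC_COEFS.get(hcc, 0.0)
    return round(score, 3)

print(raf_score(("F", "75-79"), ["HCC18", "HCC85"]))  # 1.08
```

The key intuition: the score is additive over demographics and captured condition categories, which is why an uncoded chronic condition directly lowers the number.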


So your clinic’s RAF profile is not just an abstract number—it’s CMS’s best guess of how complex and costly your panel is. 


2. How RAF turns into dollars 


A RAF score is used differently depending on the program, but the logic is similar: 

  1. CMS estimates a baseline cost for a “1.0” beneficiary in a given county/year. 

  2. That baseline is multiplied by the patient’s RAF score (plus other factors like demographics and geography). 

  3. The resulting adjusted amount flows into: 

    1. Medicare Advantage payments to plans, 

    2. Benchmarks and capitation in ACO REACH and other CMMI models, 

    3. And, downstream, into shared savings, PMPMs, and bonuses for providers. 
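The multiplication in steps 1 and 2 can be sketched directly. The benchmark figure below is made up for illustration; actual baselines vary by county, year, and program, and real payments layer on additional adjusters.

```python
# Minimal sketch of the payment arithmetic described above.
# The benchmark is a hypothetical number, not a real county rate.

COUNTY_BENCHMARK = 1000.0   # hypothetical per-member-per-month baseline for a "1.0" beneficiary

def risk_adjusted_pmpm(raf, benchmark=COUNTY_BENCHMARK):
    """Scale the baseline dollar amount by the patient's RAF score."""
    return raf * benchmark

print(risk_adjusted_pmpm(1.0))  # 1000.0  (average beneficiary)
print(risk_adjusted_pmpm(2.0))  # 2000.0  (roughly twice the expected cost)
print(risk_adjusted_pmpm(0.5))  # 500.0
```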


This has three practical implications for your clinic: 

  • If your panel is sicker than average but your RAF scores are low (because conditions are under-documented), you are underpaid relative to your true workload. 

  • If your RAFs are high only because of aggressive coding, you may be temporarily “overpaid” and sitting in the crosshairs of audits. 

  • Over time, CMS keeps renormalizing the model so the average FFS risk score stays at 1.0, even as coding and populations shift. 


RAF is not “free money for codes.” It’s how the system decides how much resource should follow your patients.


3. What a RAF score tells you about your panel  


Think of RAF in three layers: 


3.1 Patient-level RAF 


At the individual level, RAF is a proxy for multimorbidity and expected cost: 

  • More serious chronic disease (HF, CKD, diabetes with complications, COPD, serious mental illness) → higher RAF. 

  • Age, disability, and dual-eligibility status also contribute. 


For a front-line clinician, this is a rough marker of complexity, not a clinical risk score like CHA₂DS₂-VASc or MELD. But when RAF is much lower than your clinical impression, it’s a signal that documentation and coding aren’t reflecting reality. 


3.2 Clinic-level RAF (your “panel profile”) 


Aggregated across your panel, RAF starts to say things like: 

  • “Our average patient is 1.4× as complex as the average Medicare beneficiary.” 

  • “Our SNF panel has a much higher RAF than our community panel, which should be reflected in staffing and funding.” 
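Statements like these come from simple aggregation of patient-level scores. A minimal sketch, with invented patient scores:

```python
# Sketch: rolling patient-level RAF scores up into a panel profile.
# The scores below are illustrative, not real patient data.

community_panel = [1.0, 1.2, 1.4, 1.6]   # hypothetical patient RAF scores
snf_panel = [2.0, 1.8, 2.5]

def panel_raf(scores):
    """Average RAF across a panel, rounded for reporting."""
    return round(sum(scores) / len(scores), 2)

print(panel_raf(community_panel))  # 1.3
print(panel_raf(snf_panel))        # 2.1
```

Comparing those two averages is what turns RAF into a staffing and funding argument: the SNF panel above carries roughly 60% more expected cost per patient.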


High, accurate RAF at the clinic level should correlate with: 

  • Higher actual disease burden in your population, 

  • More intensive clinical work, and 

  • Appropriately higher per-patient revenue to support that work. 


3.3 System-level RAF and policy 


From CMS’s and MedPAC’s perspective, RAF scores also reveal: 

  • Coding intensity—how much of the difference between MA and FFS spending is driven by documentation patterns rather than real morbidity. 

  • The need to adjust models (e.g., HCC v28) and apply coding intensity adjustments to prevent overpayment. 

That’s why RAF is under growing regulatory scrutiny, not just actuarial interest. 


4. The good, the bad, and the ugly of RAF for your clinic 


4.1 The good: when RAF works in your favor 


When used as intended, risk adjustment can: 

  • Protect clinics caring for sicker patients from financial penalty, by ensuring payments rise with disease burden. 

  • Support investment in care management, social work, pharmacy, and behavioral health for high-RAF panels. 

  • Make quality comparisons fairer, because outcomes are judged after adjusting for baseline risk. 


For SNFs, LTC, and complex PCPs, accurate RAF is often the difference between: 

  • “These patients are too complex to manage under our budget” 

  • …and “We’re paid enough to build a real chronic-care program.” 


4.2 The bad: coding intensity and warped incentives 


The same mechanism can distort behavior. Evidence from MedPAC, Health Affairs, and Annals of Internal Medicine shows that: 

  • MA plans and some provider groups have used chart reviews and home-visit programs to maximize documented diagnoses, generating tens of billions of dollars in extra payments. 

  • This “differential coding” is not always tied to true increases in morbidity and has triggered calls for model reforms and stricter auditing. 

  • RAF can create pressure to “find more codes,” even when the clinical value is questionable. 

Investigative journalism has illustrated the extreme end of this spectrum—e.g., one-hour home visits used primarily to harvest diagnoses that raise RAF and payments. 


4.3 The ugly: under-coding and missed complexity 


On the other side, many clinics—especially safety-net, rural, and busy primary care—face the opposite problem: 

  • Multimorbid, high-need patients 

  • Incomplete documentation and problem lists 

  • Limited coding support 


That combination leads to RAF scores that are systematically too low given true patient complexity, which: 

  • Undercuts revenue, 

  • Masks the need for staffing and infrastructure, and 

  • Makes it look like your population is “simpler” than it is. 


In other words: RAF can hurt you both when it’s abused and when it’s ignored. 


5. So what does a “good” RAF strategy look like? 


For a clinic, “good RAF” is not “the highest possible numbers.” It’s: 

Accurate, defensible scores that match your real clinical workload—and stand up to audit. 


Practically, that means: 

  1. Tight alignment between documentation and codes 

    1. Every chronic diagnosis that drives RAF is clearly supported by the history, exam, labs/imaging, and plan. 

  2. Annual re-assessment of chronic conditions 

    1. Conditions that are still active are evaluated and addressed; resolved problems are clarified as historical. 

  3. Focus on whole-patient complexity, not “favorite codes” 

    1. Prioritize accurate capture of major drivers of cost: multimorbidity, advanced organ disease, serious mental illness, frailty. 

  4. Using data and AI to surface real gaps, not to game the system 

    1. Clinical reasoning AI should scan longitudinal records for documented-but-uncoded conditions, 

    2. Assemble evidence packets for clinician review, and 

    3. Help you avoid both under-coding and unsupported diagnoses. 


It should not suggest conditions without chart evidence or bypass clinician judgment. 


  5. Understanding your RAF in context 

    1. Compare your panel’s RAF to: 

      1. Clinical measures of multimorbidity, 

      2. Hospitalization and ED rates, 

      3. Social risk factors. 

    2. Large mismatches are a sign that your documentation and risk model are out of sync. 
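One way to operationalize that last point is a crude mismatch flag. The proxy (active chronic condition count) and the threshold below are invented for illustration; a real check would use your own panel data and calibrated cutoffs.

```python
# Hypothetical sanity check: flag patients whose RAF looks far below a
# crude complexity proxy (count of active chronic conditions).
# The 0.25 threshold is an arbitrary illustrative choice.

def flag_mismatch(raf, chronic_condition_count, ratio_threshold=0.25):
    """Flag when RAF per chronic condition falls below a chosen threshold."""
    if chronic_condition_count == 0:
        return False
    return (raf / chronic_condition_count) < ratio_threshold

# A patient with 6 active chronic conditions but a RAF of 0.9
# (0.15 per condition) may be under-documented.
print(flag_mismatch(0.9, 6))  # True
print(flag_mismatch(1.8, 4))  # False
```

Flagged charts are candidates for documentation review, not automatic coding changes.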

 

6. How to talk about RAF with your team (in plain language) 


When you’re explaining RAF to clinicians, nurses, or operators, you might say: 


  • “RAF is how Medicare decides how sick our population looks, and how much money follows them.” 

  • “A RAF of 1.0 is average; our patients are often more complex than that. If we don’t document that complexity clearly, we’re under-resourced.” 

  • “The goal isn’t to ‘chase codes,’ it’s to tell an honest, defensible story about our patients’ chronic disease—and let the risk model do its job.” 


When your team sees RAF this way, it becomes less about gaming the system and more about aligning dollars with reality so you can care for complex patients sustainably. 


References  

  1. Carlin CS, et al. The Mechanics of Risk Adjustment and Incentives for Coding Intensity in Medicare. Ann Intern Med. 2024. 

  2. Centers for Medicare & Medicaid Services (CMS). 2025 & 2026 Medicare Advantage and Part D Advance Notice Fact Sheets. CMS.gov. 

  3. CMS. Risk Adjustment Fact Sheet. Physician Feedback Program / Value-Based Payment Modifier. 2015. 

  4. CMS. Medicare Managed Care Manual – Chapter 7: Risk Adjustment. 2014. 

  5. Curto VE, et al. Coding Intensity Variation in Medicare Advantage. J Gen Intern Med. 2025. 

  6. Kronick R. Are Fewer Diagnoses Better? Assessing a Proposal to Curb Medicare Advantage Coding Intensity. Health Aff. 2025. 

  7. McWilliams JM, et al. Use of Patient Health Survey Data for Risk Adjustment to Improve Equity. Health Aff. 2025. 

  8. MedPAC. Estimating Medicare Advantage Coding Intensity and Its Effects. Report to Congress, March 2024. 

  9. Walsh J, et al. CMS-HCC Risk Score Accuracy Improves Clinical Outcomes Among Medicare Advantage Enrollees. J Clin Trials. 2017. 

  10. CMS. Risk Adjustment and Risk Stratification in Quality Measurement. MMS Hub Supplemental Material. 2023. 

  11. Garrett B, et al. Favorable Selection in Medicare Advantage? Urban Institute. 2024. 

  12. “Understanding Risk Adjustment Factor (RAF) Scores and Their Impact on Reimbursement.” Currents in Neurocritical Care. Neurocritical Care Society. 2024. 
