The prescription was approved.
It's been refilling ever since.
And no one's been watching.
Until now.
Three independent populations. 65,234 patients. Four therapy categories.
Here's what we found:
40% of reviewed members were confirmed clinically appropriate: a positive clinical quality statement no PBM report, PA log, or claims analysis has ever produced. That documentation is the product.
Expensive prescriptions keep refilling. Costs keep climbing. Nobody qualified and independent is checking whether those therapies are still the right call. We check. We document what we find. You hold the evidence and decide what to do with it. No denials. No intervention. Just the answer to a question no one else is asking.
For health plans, self-funded employers, and the brokers who advise them.
Your organization pays thousands per person per year for GLP-1s, biologics, and other high-cost chronic therapy. Once someone starts, the prescription refills. Month after month, automatically, through a system designed to approve initiation and then step aside. The PA may renew once a year, but that checks whether the patient still qualifies, not whether the therapy is still working. Between those renewals, no independent reviewer is looking.
Nobody produces a governance finding for that population. Cadence does. And stands behind it as the governing body.
Cadence occupies for continuation governance the same structural position NCQA occupies for health plan accreditation: independent body, published standard, credentialing function.
We run a structured review under a published methodology, document what we find, and hand it back. The artifact is yours. Present it to your leadership, hand it to your stop-loss carrier, or hold it. Your call. An internal team that found the same things would be obligated to act on them. You're not. That's the design.
You receive intelligence, not an obligation.
Conducted under a published standard (CGS v1.1). Not software. No integration. No PHI. Advisory only. See the credential →
The test: Ask your VP of Medical Management to show you the document that proves your plan systematically reassessed expensive ongoing therapy continuation across a defined population last quarter. They can show you what you spent, what you denied, and what's on formulary. They cannot show you what you did about the patients who stayed on therapy month after month with no structured touchpoint. That document doesn't exist. Because the process that produces it has never been built.
© 2026 Cadence, LLC — Continuation Governance Intelligence™
How ready is your organization?
Most organizations govern drug initiation with PA, step therapy, and formulary controls. Continuation gets the same tools repurposed, none of which were designed to assess whether ongoing therapy is still appropriate. The GRS measures where your organization falls on that spectrum: what reassessment processes exist, what governance documentation you can produce, and where the structural gaps are.
Twelve questions. Two minutes. At the end, you'll see your score, what it means, and a specific path forward.
Everyone is governing their piece.
Nobody is governing the whole picture.
Your PA team approves access. Your PBM manages formulary. Your providers treat. Each one does their job well. But between those handoffs, prescriptions keep refilling with no structured reassessment, no documented governance finding, and no artifact your leadership can hold. The gap isn't a failure of any one team. It's a structural condition.
Three facets of the same gap
The PA renewal checks eligibility: does this patient still qualify? It does not ask whether the therapy is still producing outcomes, whether the dose is still appropriate, or whether the clinical picture has changed since initiation. Between renewals, refills process automatically. A patient may continue for years with no touchpoint beyond a periodic gate that asks nothing about the state of their care.
Claims show utilization. Labs show response. Notes show rationale. Pharmacy data shows fill patterns. Each system holds a piece of the picture. No system assembles them into a structured reassessment. The intelligence exists. It's never been pulled together.
Step therapy, PA renewal, UM referral: access and eligibility tools. The PA renewal is binary (approved/denied). UM referral is intervention-grade. Neither produces an influence metric or an artifact showing continuation was assessed at the population level. They were designed for drug selection and access control. No one redesigned them for recurring continuation governance.
Where Cadence Sits
Every tool in your current stack was designed for something else.
| Capability | Claims Analytics | PA / UM | PBM Reporting | Internal Review | Cadence |
|---|---|---|---|---|---|
| Governs continuation | — | — | — | — | ✓ |
| Produces a governance artifact | — | — | — | — | ✓ |
| Advisory-only (no duty to act) | ✓ | — | ✓ | — | ✓ |
| Measured influence metric | — | — | — | — | ✓ |
| Audit trail | — | — | — | — | ✓ |
5 for 5. Claims analytics sees the population. PA/UM acts on individuals. PBMs report spend. Internal review triggers the duty to act. Cadence is the only column that governs continuation, produces an artifact, measures influence, and maintains a sealed review record. All without compulsion.
Financial auditing works because the auditor doesn't work for the CFO.
Continuation governance works the same way.
What we find when we look
Members continuing on therapy not because a clinician confirmed it was appropriate, but because the prescription was written, the refill was authorized, and nobody circled back. It could be waste. It could be entirely appropriate. The point is that no one has checked. That's continuation inertia: what happens when a system governs initiation with rigor and then steps aside.
40% of reviewed cases were confirmed appropriate. The right therapy, at the right dose, continuing for the right reasons. A human reviewer confirmed each one. The other 60% had grounds for a trajectory change that would have gone unexamined indefinitely. You can handle what you're aware of. It's the spend no one's examining that compounds.
The structural question is how to make continuation visible without triggering the compliance machinery that comes with finding something. This is why no one has built it internally. The moment an internal medical director looks at a member who's been on semaglutide for 18 months with no measurable response, they can't just document it. They're a covered entity. Fiduciary obligations, licensure standards, regulatory exposure. Their governance review just became a utilization management action: P&T, legal, provider relations, member grievance protocols. The duty to act is why every plan that considered building this stopped before starting. You can't govern what you're compelled to intervene on.
An external, advisory-only layer doesn't carry that obligation. It documents, measures, produces an artifact. The plan decides what to do with it through existing channels, if anything. That structural separation is what makes the governance possible. No internal team can certify its own work under the standard. The credential requires an independent certifier.
The published literature sees the same thing. A systematic review of 53 studies found median time to treatment intensification exceeds 12 months. Clinical inertia prevalence in the U.S. ranges from 35–86% across published diabetes studies. Adalimumab biosimilar adoption remains in the low single digits nationally despite widespread availability. Published therapeutic inertia research independently describes a gap of the same magnitude we measured. See the full evidence synthesis →
Run a structured reassessment on any continuation population.
The same pattern appears.
When a payer flags fraud, waste, or abuse, the duty to act kicks in. Legal exposure, compliance triggers, political fallout. That compulsion is why internal governance programs collapse before they launch. Cadence doesn't look for fraud. It looks for the absence of structured reassessment, documents what it finds, and compels no one to intervene. You see the shape. You decide what to do with it.
When you run a structured reassessment on a specialty continuation population, the outcomes sort into four buckets: Continue (confirmed appropriate), Adjust (dose modification indicated), Taper (step-down indicated), Switch (alternative indicated). The distribution across those four buckets is the shape.
The shape exists because continuation populations are a mix. Some patients are on exactly the right therapy at the right dose and should stay there. That's the DAR. Others have drifted: a dose never adjusted after stabilization, lab monitoring that lapsed, a biosimilar that became available, a clinical context that changed. It went unexamined because the system never scheduled the examination. The shape is what appears when someone finally conducts one.
The shape was consistent across two completely independent cohorts: different payer type (commercial vs. self-funded employer), different population mix (GLP-1 only vs. four therapy categories), different sizes (25,000 vs. 9,500), and the RIR landed within 2 percentage points both times: 60% and 58%. The outcome distributions were close but not identical (40/25/20/15 vs. 42/23/21/14). Close enough to confirm the structural pattern, different enough to prove these are independent measurements. Across four therapy classes in Cohort 2, the RIR ranged from 54% (behavioral) to 61% (biologics). The variation is real. The pattern is consistent.
In April 2026, the same trigger methodology was applied to 30,734 patients in the NIH All of Us Research Program. Different data source (EHR, not claims), different operator (computational, not human), nationally recruited diverse population. The flag rate landed at 29.1%. The shape held.
The shape is not a GLP-1 phenomenon. It is a structural property of ungoverned continuation populations. It appears wherever the reassessment was never scheduled.
"We found $41M in avoidable spend."
It triggers legal review, implicates providers, requires action, and creates adversarial dynamics with your network. Gets killed in committee by General Counsel.
"We assessed 25,000 continuation members through a structured governance cycle. Here is the artifact, here is the measured influence rate, and here is what changed."
Produces a documented artifact, confirms appropriate continuation, and creates optionality. Gets approved because it documents without compelling and proves neutrality.
What makes this different
When a board member asks "what is your continuation oversight posture?" the plan has a documented answer. Not "we found problems," but "we assessed our continuation population, here is the measured governance signal, and here is what the cycle produced."
If RIR shows 60% trajectory influence, that is information, not a compliance trigger. The organization can use it to inform strategy without the legal and political machinery that activates when something is labeled fraud or waste.
The 40% DAR is a clinical finding with no precedent in health plan reporting. A plan that can say "we reviewed 2,300 members and a qualified clinical reviewer confirmed each one is on the right therapy at the right dose" holds documentation nothing in the PBM, PA, or claims stack can replicate. It's the kind of finding Star Ratings and HEDIS were designed to reward but no current process produces. The RIR gets attention because it's the dramatic number. The DAR is the one that proves the instrument is neutral.
This is what the shape looks like:
Four outcomes. One bar. The shape of continuation governance made visible.
Cohort 1: commercial health plan, GLP-1 agonists, 90-day governance cycle
The Seven Non-Negotiables
These constraints are the trust architecture. They make deployment possible where every other approach gets killed in committee.
Without governance, a member continuing for 18 months eventually hits a PA wall and receives a denial out of nowhere. With Cadence, the governance touchpoint happens proactively: the provider gets an advisory signal, the member gets a reassessment conversation instead of a denial letter. Structured governance doesn't restrict access. It replaces surprises with conversations.
Seven steps from a claims file to a governance artifact.
One CSV goes in. Ninety days later, a documented governance record comes out. Every step between is structured, auditable, and advisory-only. Click any node to see what happens at that stage.
The governance cycle runs on data the payer or employer already has: claims, pharmacy benefit, and clinical feeds they already provide to analytics vendors. Cadence is the first process that assembles it into a structured governance input. Governance is pattern recognition, not diagnosis. Four fields are sufficient.
Three metrics nobody had measured.
Two cycles that proved the pattern.
RIR, DAR, and GPR were measured across 34,500 members in two governance cycles with human clinical review. The governance gap those metrics describe has since been independently validated in a 30,734-patient NIH national cohort. RIR measures the governance signal. DAR proves the instrument is neutral. GPR shows whether the signal persists without compulsion.
Governance at as little as 1% of therapy cost. The $8.6M is TAF-weighted by outcome type. The artifact documents it. Your leadership determines the response.
The percentage of reviewed cases where structured reassessment identified grounds for a trajectory change. Correlational, not causal. The governance value is in the documentation, not the attribution.
Declining RIR across cycles is not diminishing returns. It's proof the governance is working. If Cycle 1 produces 60% RIR and Cycle 3 produces 45%, the cases that needed trajectory change were caught. Rising DAR across cycles confirms it. That accumulated data under the standard is the dataset no one else has.
The percentage of reviewed cases where a qualified clinical reviewer confirmed continuation is clinically appropriate at the current dose for this patient at this time. RIR + DAR = 100%, always. The 40% is not a residual. It is independent clinical verification, member by member, that continuation is warranted. No utilization report, formulary analysis, or claims summary generates this documentation.
The RIR captures attention. The DAR is what your CMO will cite when the governance committee asks whether the instrument is neutral. NCQA's Medication Therapy Management measures, HEDIS SUPD, and CMS Star Ratings are all moving toward exactly this kind of evidence. The artifact produces the quality documentation these frameworks increasingly reward.
Did the trajectory change persist into the next governance cycle? Measured across two cycles.
Directional economic signal. First cycle: $8.6M. The economic shape of governance, now documented.
First Governance Cycle
The framework was developed through direct operational experience governing continuation populations and validated across 65,234 patients in three cohorts. Here is what structured governance produces.
Second Governance Cycle — Employer Cohort
The first cycle proved the shape on a 25,000-member commercial payer. To validate whether the governance signal was population-specific or structural, we ran a second independent cycle on a 9,500-member self-funded employer: different payer type, different population mix, same standard.
Five altitudes of governance maturity.
Most organizations haven't left base camp.
Every organization falls somewhere on this mountain. The question is whether you know where, and whether you've chosen to be there or just ended up there by default. Click any camp to see what governance looks like at that altitude, what you can show your leadership, and what's missing.
Most large payers self-assess at Level 2 or Level 3. Level 2 is understandable: analytics without action is a known limitation. Level 3 is uncomfortable. It reveals that escalation, the tool you rely on most, was never designed for continuation governance. PA renewal, step therapy, UM referral. They control access. They don't assess whether ongoing therapy is still the right call.
The gap between Level 3 and Level 4 is the entire Cadence thesis. Level 4 is where a structured, recurring, documented governance cycle exists. Where a measured influence rate is produced. Where an artifact your leadership can hold comes out the other end. That's a different altitude. The mountain shows you how far the climb is.
That discomfort is where the conversation begins.
Haven't taken the GRS yet? Your score places you on the mountain. Take the assessment →
If we can't survive these, we don't deserve your time.
Every objection we've heard. Answered directly. With the falsification criteria that would prove us wrong.
"This is just utilization management with a nicer name."
UM decides whether to approve or deny coverage. Cadence produces four outcomes: Continue, Adjust, Taper, Switch. None of which are authorization decisions. There is no denial pathway in the system. The output is an advisory governance signal, not a clinical decision or an authorization action. UM controls access. Cadence documents the clinical state of continuation under the standard and issues a governance credential. No UM program does either. The structural difference is the entire point.
What would falsify us: Evidence that a UM program produces a versioned governance artifact with an influence rate, audit trail, and governance parameters, without triggering a denial pathway. We haven't found one.
"Our internal clinical team could build this."
The moment an internal medical director identifies a member on 18 months of semaglutide with no measurable response, they can't just document it. They have fiduciary obligations, licensure standards, and regulatory exposure. Their governance review becomes a utilization management action, which triggers P&T, legal, provider relations, and member grievance protocols. That's the duty to act. An external, advisory layer doesn't carry that obligation. It documents and produces an artifact. The plan decides what to do with it. And even if an internal team could navigate the duty-to-act problem, they cannot certify their own governance under the standard. The credential requires independence.
What would falsify us: A health plan that has deployed structured continuation governance internally: at population scale, with an influence metric, without it collapsing into their UM machinery. We've looked. It doesn't exist.
"The artifact creates legal exposure. Now we know and have to act."
The artifact documents population-level governance patterns, not individual clinical directives. No member-level recommendation targets a specific patient's coverage. The four outcomes are advisory observations by a reviewer with no clinical authority over the member's care. And the artifact explicitly creates optionality, not obligation. The plan can route signals through existing channels, inform formulary strategy, present to the board, or do nothing. All documented. The compulsion triggers when an internal team with fiduciary obligations identifies individual-level clinical concerns. Cadence is external, advisory, and population-level.
What would falsify us: A legal opinion from an ERISA or health plan compliance attorney that an external, advisory-only, population-level governance artifact creates constructive knowledge compelling intervention. We recommend every plan have General Counsel review the pilot agreement, and we've designed it to survive that review.
"What stops a plan from weaponizing this data to deny care?"
The 40% DAR is documented in the same artifact. Any plan that acts on the 60% while ignoring the 40% has created an auditable record of selective enforcement. The case-level documentation shows every outcome including confirmations of appropriate continuation. It cuts both ways. And the entire value proposition depends on the advisory-only positioning surviving contact with legal, compliance, and provider relations. A plan that weaponizes the data destroys the political architecture that made deployment possible. Neutrality is the structural reason this product can exist.
What would falsify us: Evidence that a payer used the deliverable data to initiate coverage denials. If that happened, we would terminate the engagement. The product cannot survive without neutrality.
"Why would I pay for something that might just confirm continuation is appropriate?"
Because that confirmation is the product. If 100% of your reviewed population is continuing appropriately, the cycle output says so. Documented and measured. That is the answer your board, your actuary, and your stop-loss underwriter have never had. The 40% confirmed appropriate in the first cycle wasn't a failure. It was proof the system is neutral. You're not paying for change. You're paying for visibility.
What would falsify us: Evidence that documented governance produces no downstream value. That boards don't use it, actuaries ignore it, and stop-loss underwriters don't care.
"What would prove this entire thesis wrong?"
Show us a major payer or PBM that has deployed structured, recurring continuation governance with a measured influence metric and auditable configuration at population scale. We've searched NCQA databases, URAC standards, AMCP proceedings, and PBM product catalogs. Nothing fits.

Show us an industry standard that mandates periodic continuation reassessment with documented outcomes; not PA renewal, actual governance with a measured cycle. None exists.

Show us a technology platform already occupying the layer between claims analytics and UM, producing the governance documents. Cotiviti, Waystar, and Zelis are closest. None produce what we produce. And none operate under the standard or issue a governance credential.

The standard already exists. The certification authority already exists. A competitor who builds the service still can't certify their own output. And the most likely incumbents, PBMs, face a structural conflict: they profit from the rebates and volume on the very therapies being governed. Independence isn't a positioning choice. It's a prerequisite.
If any of these three conditions are met, our thesis is wrong. We publish the falsification criteria because we've looked and they don't exist. If you've found one we missed, we want to know.
"What if our RIR is lower than 60%?"
A 30% RIR means 70% of your reviewed population was confirmed clinically appropriate by an independent external reviewer. That's a clean governance finding, like a clean financial audit. The document doesn't lose value because it found less to change. It's documented governance under the standard, with a full audit trail. A 75% RIR is intelligence. A 30% RIR is validation. Both are governance. Both produce an artifact no one else can produce.
What would falsify us: Evidence that a documented, sealed governance record has no organizational value at any RIR. That boards, underwriters, and regulators are indifferent to documented continuation oversight regardless of what it finds. We have not encountered this.
Challenge us. Then decide whether the answers hold.
Change the assumptions.
The math still works.
Move the sliders. Adjust the cohort, the cost, the influence rate. The governance economics update in real time.
The percentage of reviewed continuation cases where structured reassessment correlated with a trajectory change: Adjust, Taper, or Switch. The first metric designed specifically to measure whether a governance cycle is producing a measurable signal across a continuation population.
Whether structured reassessment correlates with clinical trajectory change, not whether it caused it. Correlational.
No existing metric measures continuation governance. PA renewal tracks eligibility. UM tracks intervention. RIR tracks whether anyone looked, and what they found.
RIR + DAR (Documented Appropriateness Rate) = 100%, always. A healthy governance cycle produces both trajectory changes and confirmed appropriate continuation. The 40% DAR proves the system isn't rigged to force change.
Every flagged case gets five governance questions. Not clinical questions, but governance questions, and the answers determine the outcome.
If all five answers confirm active governance: Continue. If one or more reveals that no one has looked, Adjust, Taper, and Switch emerge. Not because the therapy is wrong, but because no organization has confirmed it's right.
Your Cohort Size
Set your population size. The numbers flow through: cohort → 25% flagging → 92% completion → review base for outcomes.
25% = proportion meeting configured trigger thresholds (duration, dose, lab gaps). 92% = reviewer completion rate measured in first governance cycle. Both are adjustable in deployment; these are starting benchmarks.
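As a sketch of how those benchmark rates flow through a hypothetical cohort, the funnel arithmetic can be written out directly. The 25% flag rate, 92% completion rate, and 40/25/20/15 outcome mix are the first-cycle figures quoted above; the function name and structure are illustrative, not the production pipeline.

```python
# Illustrative sketch of the governance funnel using first-cycle
# benchmark rates. Names and structure are hypothetical.

FLAG_RATE = 0.25        # share meeting configured trigger thresholds
COMPLETION_RATE = 0.92  # reviewer completion rate, first cycle
# First-cycle outcome distribution: Continue / Adjust / Taper / Switch
OUTCOME_MIX = {"Continue": 0.40, "Adjust": 0.25, "Taper": 0.20, "Switch": 0.15}

def governance_funnel(cohort_size: int) -> dict:
    """Flow a cohort through flagging, review completion, and outcomes."""
    flagged = round(cohort_size * FLAG_RATE)
    reviewed = round(flagged * COMPLETION_RATE)
    outcomes = {k: round(reviewed * v) for k, v in OUTCOME_MIX.items()}
    return {"flagged": flagged, "reviewed": reviewed, **outcomes}

print(governance_funnel(25_000))
```

On the 25,000-member first-cycle cohort this reproduces the counts cited elsewhere on this page: 6,250 flagged, 5,750 reviewed, and 2,300 Continue outcomes.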
Adjust the Outcome Distribution
Drag the sliders to model different scenarios. The formula updates in real time. Defaults reflect first governance cycle data.
Governance Signal Value
= (1,438 Adjust × $7,242 × 0.25) + (1,150 Taper × $7,242 × 0.50) + (862 Switch × $7,242 × 0.30) ≈ $8.6M
TAF-weighted by outcome type. ATC = $7,242. Money made visible.
Neutrality Check
≥35% indicates the system is not engineered to force change
Complement: RIR + DAR = 100.0% always.
✓ Neutrality confirmed. DAR of 40% confirms the governance cycle is not engineered to maximize trajectory change. The system identifies both cases that should change and cases that should not.
You've modeled the signal. Ready to see what your actual population produces?
Model your population.
See what nobody has measured.
What does governance cost for your population, and what does it surface? Pick a therapy class. Drag the sliders. The math is live.
The bar is small. That's the point. Governance costs a fraction of the spend it oversees.
The economics above show what governance costs. This shows the size of what is currently invisible.
Organizations don't pay for the artifact. They pay for the answer to the question their board will eventually ask: "What is your continuation oversight posture?" The artifact is the answer. The question is coming whether you buy governance or not.
The dollar amount is the number everyone sees first.
It's not the only value. It may not be the most important one.
Before Cadence, the $1.7M didn't exist, not as a number anyone had measured, documented, or could present. After one cycle, your organization can see the economic shape of its continuation population. Previously invisible. Now measured. You can't govern what you haven't measured. GSV makes the invisible measurable, and that measurement is the precondition for every strategic decision that follows, whether you act on it or not.
2,300 members were reviewed by a qualified human who confirmed their therapy should continue. That's the DAR. Documented clinical appropriateness. The 60% who received Adjust, Taper, or Switch recommendations are optimization opportunities the plan never knew existed: better dosing, safer step-downs, more appropriate alternatives. Whether or not the treating provider acts, the governance cycle surfaced them and documented them.
The absence of governance is itself a financial exposure. A board that can't answer "what is your continuation oversight posture?" carries regulatory, reputational, and fiduciary risk that grows with every unexamined dollar. Cadence doesn't eliminate that risk. It documents the fact that you looked. An organization that can produce a governance artifact on demand is in a fundamentally different position than one that can't, regardless of what the artifact contains.
What Does Each Approach Produce?
Same population. Same clinical reviewers. Same time spent. The difference is what you can show for it.
Estimated manual cost: 1,250 flagged × 92% reviewed = 1,150 × 20 min = 383 hrs × $150/hr = $57,450. You spend the time. You produce no artifact.
Investment: $95,000 (~$6.33 PMPM). GSV of $1.72M in economic activity. Now measured and newly documented.
The artifact documents what governance produced. The difference: now you hold something concrete. What comes next is up to your team.
Multi-cycle RIR and GPR data makes continuation spend modelable. Actuaries can project governance yield, calibrate reserves, and forecast trend with a variable they've never had before.
Second cycle: RIR 52% (lower. The most obvious cases were caught in cycle 1), GPR 50% (half of advisory signals persisted without compulsion), cumulative two-cycle GSV $15.7M. The artifact documents what happened and creates the dataset that predicts what happens next.
You're about to manage a population you've never had.
CMS's BALANCE Model launches GLP-1 coverage in Medicaid (May 1, 2026) and Medicare Part D (January 2027). The Medicare GLP-1 Bridge begins July 1, 2026 at $50/month copay. CMS has completed manufacturer negotiations with Eli Lilly and Novo Nordisk. States are actively applying. Millions of new continuation members are entering the system.
Every participating agency starts from zero on GLP-1 continuation governance. No baseline RIR, no documented cycle, and no artifact. That's not a weakness. It's exactly the condition Cadence was designed for.
BALANCE also requires manufacturers to provide lifestyle support programs, which creates an expectation of structured monitoring, but no governance layer exists to connect that monitoring to oversight. Meanwhile, the plans absorbing this new spend will face immediate board-level pressure to demonstrate stewardship. "We covered them because CMS told us to" is not a governance narrative.
And the persistence problem scales with access. Of those who continue, 60% showed grounds for a trajectory change when a structured review actually occurred. Published data suggests most who discontinue regain the weight, and no one governed either trajectory. More coverage without governance means more invisible inertia in both directions.
Cadence for BALANCE Entrants
Duration, dose escalation, lab gaps, and outcome absence, calibrated for GLP-1 continuation patterns.
Governance cycle designed to complement the mandatory lifestyle support requirements.
BALANCE populations have different economics than commercial. Cadence pricing for state Medicaid and Part D reflects that: lower PMPM thresholds, flexible cohort minimums, and shared-signal models where governance costs scale with the value the artifact surfaces, not fixed fees divorced from population size.
If your agency is evaluating BALANCE participation, the governance economics conversation starts before enrollment. Model the numbers →
Before your GLP-1 population arrives: identify your data extract source, designate 1–3 clinical reviewers, brief your CMO on advisory-only architecture, confirm de-identification protocol, and set a pilot start date aligned with your BALANCE enrollment timeline.
The Regulatory Horizon
BALANCE is the catalyst. It is not the ceiling. Every major regulatory vector is converging on the same requirement: prove you governed high-cost therapy continuation.
Medicare Star ratings already include medication adherence and therapy management measures. Governing continuation, not just initiation, is the natural next measure. Plans with documented governance posture will be positioned when it arrives.
State Medicaid quality measures are tightening. MCOs are already evaluated on pharmacy cost management. Structured continuation governance provides the documentation trail that "we managed it" currently lacks.
IRA drug pricing provisions create new incentives for plans to demonstrate appropriate utilization. When CMS negotiates prices, plans that can document structured governance of those same drugs hold a different position than plans that can't.
Over 40 states have enacted or proposed PBM transparency legislation. Formulary accountability, rebate transparency, and clinical justification requirements are all accelerating toward a world where "prove you governed continuation" is a contractual obligation.
None of these individually mandate what Cadence does. Collectively, they converge on it. The question is not whether continuation governance becomes a regulatory expectation. The question is whether you want to build it under pressure, or present eight cycles of audited data when the requirement arrives.
GLP-1 was the proof of concept.
The governance gap runs across every therapy class.
Every high-cost chronic therapy category in your book has the same structural problem: patients continue indefinitely, the spend accumulates, and no one governs the ongoing state of that continuation. The architecture works across all of them.
TNF, JAK, IL-17/IL-23 inhibitors. Patients continue for years after achieving remission with no structured governance checkpoint. Biosimilar availability creates trajectory change opportunities that go unexercised without a governance cycle.
Maintenance immunotherapy, CDK4/6 inhibitors, PARP inhibitors. "Indefinite continuation" is standard of care, but indefinite without reassessment means no documentation of whether maintenance is still clinically indicated.
Long-acting injectable antipsychotics, adult ADHD stimulants, buprenorphine maintenance. Some of the longest continuation durations in pharmacy, patients on LAIs for decades with no governance touchpoint beyond refill.
Post-administration monitoring for CAR-T, gene replacement, and cell therapy. One-time treatments with no standardized governance of long-term follow-on. A new category of continuation that didn't exist five years ago.
Branded biologics with available biosimilars, never assessed for switch. Among flagged members, 72% received a trajectory change recommendation: the highest RIR of any trigger category.
68 influenced cases. ~$1.2M estimated annual cost differential. Nationally, adalimumab biosimilar adoption remains in the low single digits, not because patients aren't candidates, but because no one is conducting the assessment.
GLP-1 is entry. A 100,000-member plan, fully mapped across the ungoverned continuation landscape:
Modeled for a 100K-member plan. Member counts and spend estimates based on published utilization rates and average therapy costs. The governance architecture is identical across all categories. The configuration changes. The cycle doesn't.
The structural condition: authorized access without structured reassessment. It is not unique to pharmacy. Intensive outpatient programs, specialist referral pathways, and durable medical equipment all exhibit the same pattern: continuation by default, cost by inertia. The governance cycle was designed for any domain where cost accrues without documented reassessment.
You're paying for every refill.
Nobody is checking whether they're still working.
You have employees on high-cost chronic therapy (GLP-1s, biologics, oncology maintenance) costing tens of thousands to hundreds of thousands per year. Without structured reassessment, those therapies become lifetime subscriptions by default. The refills process, the spend hits your claims report, and not a single player in your vendor ecosystem asks the obvious question: is the spend still justified?
Your TPA processed the claims correctly. Your PBM negotiated the rebates. Your stop-loss carrier priced the catastrophic layer. Everyone did their job. But none of those jobs include looking at a member who's been on a GLP-1 for 14 months and asking: has this therapy been reassessed since initiation? Is anyone checking whether the BMI is still responding? Whether labs have been ordered? Whether the dose that was escalated six months ago ever produced a result?
The answer, for most self-funded plans, is no. Not because anyone failed. Because the system wasn't designed to do it. PA checks eligibility. UM intervenes when something goes wrong.
Nobody governs the quiet middle where a member continues, month after month, through a system designed to approve and step aside.
See why this gap exists across every vendor stack →
The governance gap is 79% wider where PA doesn't exist. If your plan negotiated broad formulary access without PA requirements, your members may have zero structured oversight infrastructure for continuation therapy.
Cadence fills that gap. An external, structured oversight process that produces the one document your benefits team can't generate today.
See your numbers.
Three outputs. Your unmanaged spend, your governance cost, and the ratio between them.
For every $1 you spend on governance, you gain structured oversight of $200 in previously unmanaged continuation spend.
The stop-loss conversation nobody is having.
Your stop-loss carrier prices risk based on what they can see about how you manage your plan. Right now, they see your claims history. They see your catastrophic cases. They see your network discounts and your PBM contract. They do not see governance. Because no plan sponsor has ever produced a governance credential for their continuation book.
Structured reassessment identifies members on subtherapeutic doses, stalled therapies, and monitoring gaps: the clinical conditions that generate avoidable downstream spend.
Now imagine your next renewal conversation. Your broker slides the governance artifact across the table: "Our client implemented structured continuation governance across their specialty population last quarter. Here's the configuration. Here's the measured influence rate: 60% of reviewed members had a trajectory change recommendation. Here's the audit trail. Here's the 40% Documented Appropriateness Rate: a qualified reviewer confirmed every one of those members is on the right therapy at the right dose. This plan governs what happens after authorization."
That's a conversation no one else is having with their stop-loss carrier.
The question isn't whether this has value. The question is how much a carrier discounts a plan sponsor who can prove active management of their most expensive members.
Ask your broker how demonstrable oversight of continuation populations factors into your next stop-loss conversation.
If your stop-loss renewal is within 180 days, a governance cycle can produce an artifact before your broker meets the underwriter.
The Cadence Governance Certificate formalizes this conversation. A standardized credential (governance scope, influence rate, audit integrity, renewal alignment) that your broker hands the carrier at renewal. It's the document that makes governance visible to the underwriter for the first time. Learn more →
Better for your employees too.
Without governance: a member continues on a therapy for 18 months. If they have PA, they hit a renewal wall and get a surprise denial. They call your benefits team angry and confused. If they don't have PA, no one ever looks: the refill processes indefinitely and the spend grows until someone notices a trend line. Either way, your HR director is fielding a problem that governance would have prevented.
With governance: the governance cycle flags the member proactively. An advisory signal reaches the provider: not a denial, a conversation prompt. The member gets a reassessment visit instead of a denial letter. If the therapy is still appropriate, it's confirmed and documented. If it's not, the conversation happens with a clinician, not an authorization system.
Governance replaces surprises with conversations. Your employees don't feel the governance. They feel the absence of the surprise that would have happened without it.
The fiduciary question.
ERISA requires prudent stewardship of plan assets. Your plan is spending millions annually on specialty continuation therapies. If a board member, an auditor, or a plaintiff's attorney ever asks "what structured oversight did you have for your highest-cost continuing therapies?" — there is currently no answer. Not a bad answer. No answer. Because no process exists to produce it.
Cadence produces the answer. A documented record with defined parameters, an influence rate, a sealed review log, and a governance narrative. That's the document you want to exist before anyone asks for it.
Cadence operates alongside your existing TPA and PBM. We don't replace either. We don't integrate with either. One de-identified claims data extract. The same format your analytics vendors already receive. One 90-day governance cycle. One artifact.
For benefits consultants and brokers
If you're advising self-funded employers, this is the conversation their competitors aren't having with their stop-loss carriers. A governance credential backed by measured data, not a vendor promise. Bring it to your book.
This is what you'd hold.
A sealed, versioned governance record with a presentation-ready executive summary. It proves that continuation was governed at the population level. One 90-day cycle produces it. Nothing in your current vendor stack does.
Triggers: DUR12, NOOUT, DOSEUP | Thresholds: duration >12mo, no outcome Δ 6mo, dose ↑ w/o response
Reviewer qualification: PharmD | Review window: weeks 3–10
Version: v1.1.0 | Fingerprint: 8a3f…c291
| Member | Trigger(s) | Reviewer | Outcome | Rationale |
|---|---|---|---|---|
| M-04271 | DUR12, NOOUT | Dr. K.L. | Adjust | 14mo semaglutide 2.4mg, BMI plateau 8mo, no labs 10mo. Dose reassessment warranted. |
| M-11839 | DUR12 | Dr. R.M. | Continue | 18mo tirzepatide 5mg, A1c 6.4 stable, weight −12%. Continuation appropriate. |
| M-08456 | DOSEUP, NOOUT | Dr. K.L. | Taper | 22mo semaglutide, escalated 1mg→2.4mg, no further BMI Δ. Planned step-down indicated. |
This cycle assessed 25,000 members continuing on GLP-1 agonist therapy. Of 6,250 members meeting configured trigger thresholds, 5,750 received structured governance reviews by qualified clinical reviewers. 60% of reviewed cases showed trajectory change recommendations. 40% were confirmed as clinically appropriate continuation. Documented with identical rigor. All reviews were advisory-only. No denials were issued. No clinical authority was exercised.
This governance cycle was conducted under advisory-only methodology. No denial authority was exercised. Configuration fingerprint, trigger thresholds, and review parameters were predetermined and versioned. The audit trail is immutable. Process integrity confirmed.
Representative format using first governance cycle data. De-identified. The full artifact includes multi-cycle GPR tracking and a complete governance narrative.
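The funnel behind that summary is simple arithmetic. A minimal sketch reproducing the quoted first-cycle figures (variable names are illustrative, not part of any Cadence system):

```python
# Sketch of the Cycle 1 review funnel using the rates quoted in this section.
cohort = 25_000                     # members continuing on GLP-1 therapy
flag_rate = 0.25                    # share meeting configured trigger thresholds
completion_rate = 0.92              # flagged members with a completed review

flagged = round(cohort * flag_rate)                 # 6,250 enter the queue
reviewed = round(flagged * completion_rate)         # 5,750 completed reviews
trajectory_change = round(reviewed * 0.60)          # 3,450 Adjust/Taper/Switch
confirmed_appropriate = round(reviewed * 0.40)      # 2,300 documented Continue

print(flagged, reviewed, trajectory_change, confirmed_appropriate)
```

Each stage of the funnel follows from the previous one and the published rates, which is what makes the artifact's numbers auditable rather than asserted.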
If every answer confirms active governance: Continue. Documented. If one or more reveals that no one has looked, that's where trajectory change emerges.
Illustrative. CGS v1.1 requires seven trigger categories; see the Standard section for the full specification.
Your organization doesn't have this document today. After one 90-day cycle, you would.
The artifact documents what governance found. A 60% RIR is intelligence: it tells you where trajectory changes are warranted. A 20% RIR is validation: 80% of your continuation population was confirmed appropriate by an independent reviewer. Both produce a sealed record. Both produce a Governance Certificate. You don't pay your auditor because they found problems. You pay them because they documented that they looked.
The artifact arrives. Here's what happens with it.
Present findings to P&T or benefits steering committee. The artifact IS the presentation. Configuration, outcomes, narrative. Share the RIR benchmark with your PBM as a baseline they've never had.
Adjust-flagged members may warrant PBM clinical outreach. Taper signals align with existing step therapy protocols. Switch flags surface formulary optimization. The artifact tells you WHERE to look. Your infrastructure decides what to do.
GPR measures whether signals persisted. The 40% who continued get re-examined. Configuration versions forward. The artifact becomes a governance record that accumulates. Not a report. A history.
Widen the cohort: biosimilar-eligible populations, GLP-1 continuation members post-BALANCE, members approaching PA renewal with no reassessment history. Each expansion adds to the artifact without restarting.
The artifact positions you for three conversations you couldn't have before: stop-loss renewal, regulatory readiness, and fiduciary defense.
If you're a clinician who received an advisory signal from a Cadence governance cycle, through your payer partner, PBM, or health plan, this is what it means.
A qualified clinical reviewer examined your patient's continuation trajectory using structured governance criteria. The signal you received is advisory. It does not override your clinical judgment, compel any action, or carry denial authority. You retain full prescribing authority. The signal is a prompt for reassessment, not a directive for change.
If you independently determine the current therapy is appropriate, that's a Continue outcome. If you independently arrive at a trajectory change, that's measured as governance persistence (GPR). Both outcomes are valuable. Both prove the system works.
We measure whether the signal correlates with trajectory change. We never claim it caused it. Your clinical decision is yours. The governance cycle makes sure someone asked the question.
Your board will ask. Your carrier will ask.
This is the document you hand them.
No standardized credential for continuation governance exists. Nobody has published one. The Cadence Governance Certificate is the first: a third-party attestation that structured governance occurred, under a published standard, with measured outcomes and an immutable audit trail.
Plan leadership has no documented proof that anyone assessed whether ongoing therapies are still clinically appropriate. Your stop-loss carrier prices risk without seeing governance, or the cost risk that sits inside an unmonitored book.
For a plan fiduciary, the certificate is proof of prudence: documented evidence that structured oversight occurred, not a promise that it will. For your stop-loss carrier, it is the first independent documentation they have seen that continuation spend is being governed at the population level. They price risk on what they can verify. This is verifiable.
When regulators mandate continuation oversight, which BALANCE makes more likely, not less, you’ll need operational history, not a fresh start.
After completing a governance cycle, a self-funded employer receives a certificate attesting to:
How the renewal conversation changes.
Without the certificate: "Here's our claims history. Here's our network. Here's our PBM contract." The carrier prices on history and hope.
With the certificate: "Our client implemented structured continuation governance across their specialty population. Here's the measured influence rate: 60% trajectory change, 40% DAR. Here's the audit trail. Here's the configuration fingerprint. This plan manages outcomes, not just claims." The carrier sees documented governance over the most expensive members on the book. That's a first.
What the certificate looks like.
Triggers: DUR12, dose, labs, comorbidity, NOOUT, BIOSIM, LABGAP
Config fingerprint: v1.1-2026Q3-SHA256
DAR: 40% (documented appropriateness)
Outcome distribution: 25/20/15/40
Reviewers: 3 credentialed PharmDs
Methodology: Cadence Standard v1.1
Contract year alignment: 2026–2027
Certificate valid through: Q3 2027
Sample certificate with pilot reference data. Your certificate reflects your cohort, your cycle, your measured outcomes.
Why Cadence can issue it.
Cadence built the methodology, operates the cycle, and published the standard. The credential follows from the work. The Governance Certificate is the first standardized, auditable credential for continuation populations, issued against a published eight-section standard that defines what a valid governance cycle is.
For stop-loss underwriters: A 10,000-member employer with $36M in continuation exposure and a governance cycle at $6 PMPM ($180K). A 1–2% stop-loss term improvement represents $30,000–$80,000 in annual premium value. The certificate documents measured governance (RIR, DAR, case counts, reviewer credentials) that the underwriter can evaluate. Directional estimate. Premium impact depends on carrier-specific underwriting criteria.
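A sketch of that arithmetic, reading the $6 PMPM across the single 90-day cycle (which reproduces the stated $180K; figures as stated, directional only):

```python
# Sketch of the underwriter illustration above. All inputs are the stated
# figures from this section; the premium-impact range is carrier-specific
# and deliberately not computed here.
members = 10_000
exposure = 36_000_000          # annual continuation exposure ($)
pmpm = 6                       # governance cost, per member per month ($)
cycle_months = 3               # one 90-day governance cycle

cycle_cost = members * pmpm * cycle_months        # $180,000 per cycle
cost_vs_exposure = cycle_cost / exposure * 100    # 0.5% of governed spend

print(cycle_cost, round(cost_vs_exposure, 2))
```

At 0.5% of the exposure it documents, the cycle cost sits well inside the stated $30,000–$80,000 range of potential premium value.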
Certification is included with every completed governance cycle.
No standard for continuation governance existed.
So we wrote one.
CGS v1.1 is published, versioned, and open to scrutiny. The artifact is produced against it. The certificate is issued against it. Every governance cycle is measured against it.
NCQA didn't wait for permission to accredit health plans. HEDIS didn't wait for a mandate to define quality measurement. They built the standard and the market followed. CGS v1.1 is the first published methodology for structured, advisory-only continuation governance at population scale.
The product doesn't fit an existing budget category. It creates one. The standard defines what belongs there.
What the standard defines.
Why publishing the standard matters.
You're not buying a black box. The methodology is published, versioned, and open to scrutiny. Your legal team can review it. Your CMO can evaluate the clinical architecture. Your actuary can verify the formulas. The standard survives scrutiny because it was designed to.
When CMS, NCQA, or state regulators ask "what does structured continuation governance look like?" — this is the reference. The standard exists before the mandate. Organizations adopting it now are building governance infrastructure ahead of the regulatory curve.
The first governance cycle against CGS v1.1 sets the benchmark. The fifth cycle is measured against it. By the tenth, Cadence owns the only governance dataset in existence. Every engagement adds to the dataset. The dataset strengthens the standard.
Anyone who wants to compete now has two options: adopt the Cadence standard (which validates us) or create their own (which fragments the market and makes ours the incumbent). Either way, the first-mover advantage compounds.
We wrote the standard, certify compliance, issue the credential, and hold the benchmark.
Any organization can run a CGS-compliant governance cycle.
Only Cadence certifies it.
CGS v1.1 is published. The Governance Certificate is a defined credential. Every cycle certified adds to the benchmark. The organizations that certify first shape the field.
One cohort. One data extract. One 90-day cycle.
You hold the artifact.
Here's how a governance cycle works, what your team needs to do, and what you hold at the end.
Cadence has completed two governance cycles across 34,500 members and independently validated the methodology on 30,734 patients in the NIH All of Us Research Program. Accepting pilot engagements for Q3 2026. Methodology paper under peer review at JMCP.
Technology handles flagging, queue prioritization, and clinical data assembly: the entire 25,000 → 6,250 → 5,750 funnel. No human touches a case until the review step. The measured time per case: ~4 minutes. The reviewer examines an assembled clinical picture and makes a single governance determination.
5,750 reviews over 90 days = ~64 cases/day. At ~120 cases/PharmD/day, that's under 1 FTE, with a second reviewer required by CGS v1.1 for independence. An algorithm can flag. It can't govern. The human judgment is the product, not the bottleneck.
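The staffing arithmetic, as a sketch (figures as quoted above; names are illustrative):

```python
# Reviewer throughput sketch using the figures quoted above.
reviews = 5_750
cycle_days = 90
cases_per_day = reviews / cycle_days            # ~64 cases/day across the cycle

reviewer_capacity = 120                         # cases per PharmD per day (stated)
fte_needed = cases_per_day / reviewer_capacity  # ~0.53 FTE of review load

# CGS v1.1 requires a second reviewer for independence, so the load is
# split across two PharmDs while staying well under 1 FTE in total.
print(round(cases_per_day), round(fte_needed, 2))
```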
Repeatable. Each cycle compounds governance value.
4 required fields. 6 optional fields enrich the clinical signal. No EHR integration.
As little as 1% of therapy cost. Scales with cohort size.
Configuration fingerprint, governance summary, RIR benchmark, audit trail, transparency report, and governance certificate.
The first cycle is a governance diagnostic. You see the gap, you hold the artifact, your carrier sees the certificate at renewal. What happens when the governance stays on is a different question — and a different engagement model. The second cycle compounds. The third becomes predictive.
Click a phase for details.
Takes you to our contact form. We respond within one business day.
Facilitated Governance Assessment: your governance posture, scored against your actual claims data.
You send a limited dataset: same four fields, one quarter of data. We run an objective governance assessment against your actual claims patterns and deliver a scored report with specific gaps identified. The output is a document your internal champion can take to leadership with a concrete recommendation.
Most organizations that run the facilitated assessment decide the pilot is obvious. Request details →
Most organizations start at Step 1 or Step 2. Some go directly to Step 3.
The Cadence Governance Benchmark
Every client's anonymized governance data contributes to the industry's first continuation governance benchmark. Your Cycle 1 artifact measures your organization. By Cycle 3, it measures your organization against the field.
The first organizations to run governance cycles shape the standard everyone else will be measured against. A plan with eight cycles of data under the standard is making a claim no competitor can replicate in a quarter. The first-mover advantage compounds every 90 days. Benchmark data is anonymized and aggregated across all participating organizations.
What week 1 looks like.
You said yes. Here's what happens next, and what your team needs to do (it's not much).
Total time investment for your team: approximately 4 hours across the first week. After that, zero until artifact delivery.
Who needs to be in the room.
Cadence deploys into institutions, not individuals. The internal conversation typically maps like this.
VP of Medical Management, CMO, or VP of Benefits (employer). The person who owns the gap and can authorize a pilot.
Pharmacy director, clinical review team, analytics lead. The people who interact with the cycle, outputs, or data.
Legal/compliance, clinical leadership, finance. They don't use it. They bless it.
How to judge us.
We don't define success and then claim we met it. Here's how Cadence should be evaluated. By you, not by us.
We are not afraid of measurement. The entire product exists to produce it.
What happens after you hold the artifact.
The governance cycle ends. Your work begins. Now it's targeted.
Your medical director presents the artifact documenting structured oversight of 25,000 continuation members. No other document in the room does that.
Your broker slides the Governance Certificate across the table. The underwriter sees documented oversight. The renewal conversation shifts.
You sit down with your PBM armed with a measured governance signal they've never seen. "60% of reviewed members had a trajectory change recommendation. What are you doing about it?"
Present the artifact. Configuration fingerprint, outcome distribution, governance narrative. It's board-ready. Share the RIR benchmark with your PBM or clinical partner as a baseline they've never had. This is the conversation where governance becomes visible.
Act on the signal. Members flagged as Adjust may warrant provider outreach through your PBM's existing clinical programs. Taper signals align with step therapy protocols already on the books but never triggered for continuation. Switch flags surface formulary optimization opportunities that were never on anyone's radar. Cadence tells you where to look. Your infrastructure decides what to do. Purely advisory, always.
Run Cycle 2. GPR measures whether the signals persisted. The 40% who continued get re-examined. Still appropriate, or has the clinical picture changed? The trigger configuration versions forward. This is where the artifact becomes a governance identity, not a report.
Expand the cohort. Cycle 1 starts with your highest-cost specialty members. Subsequent cycles widen: biosimilar-eligible populations, GLP-1 continuation post-BALANCE, members approaching PA renewal with no reassessment history. Each expansion adds to the artifact without restarting the process.
Some organizations know exactly what to do with the artifact. Others, especially self-funded employers without internal clinical infrastructure, need a translator. The Governance Strategy Advisory is a 2–4 hour engagement per cycle delivering a one-page strategy memo alongside the artifact.
$5,000–$10,000 flat fee per cycle. The governance cycle produces the artifact. The strategy advisory produces the playbook for what to do with it.
After your governance cycle, receive the Cadence Governance Certificate: the standardized credential your broker hands the carrier at stop-loss renewal.
Cadence governance cycles produce signals that reach providers. Providers act on those signals. That's how GPR persistence happens. But today, providers have zero visibility into the aggregate governance picture.
The Cadence Provider Mirror will give large specialty practices and health systems their own governance posture: panel-level continuation patterns, response rates to advisory signals, and peer-relative benchmarks. Payers pay for the governance cycle. Providers see the mirror. The benchmark sits between them.
Four products. One data architecture. Organizations running governance cycles today are building the dataset that makes the network possible.
Takes you to our contact form. We respond within one business day.
Pick any member on your book.
We'll show you what's missing.
The spend that's accrued, the reassessment that never happened, and the document that doesn't exist. One member at a time.
Real or typical. Six questions, ninety seconds.
The Governance Simulator
The Autopsy shows the gap. The Simulator shows the process. Walk through what happens when a member enters a Cadence governance cycle: the clinical picture, the triggers, the determination.
Select a therapy class. Review the case. Make the call.
The hardest questions we hear.
Answered without hedging.
35 questions across four audiences. Search or scroll. Every answer is the real answer.
Every term on this site, defined once.
If a word is unfamiliar, it's here.
Reviewer Influence Rate. The percentage of reviewed cases where structured reassessment correlated with a trajectory change (Adjust, Taper, or Switch). Calculated as (Adjust + Taper + Switch) ÷ Completed Reviews × 100. Correlational, not causal — by design. First-cycle measured rate: 60%.
See it in Metrics & Evidence →
Governance Persistence Rate. The percentage of influenced cases where the trajectory change persisted into the next governance cycle, measured without compulsion. Proves the advisory signal was clinically sound, not just procedurally generated. Cycle 2 measured rate: 50%.
See it in Metrics & Evidence →
Governance Signal Value. The directional economic signal surfaced by a governance cycle. Calculated as the sum of influenced cases × ATC × Therapy Adjustment Factor by outcome type. First-cycle GSV: $8.6M. GSV is not a savings projection. It's the economic shape of what governance made visible.
See it in Metrics & Evidence →
Governance Readiness Score. A 12-question self-assessment across five dimensions (Cadence, Review Process, Measurement, Audit, Configuration) that produces a 0–100 score and a maturity level (L1–L5). The facilitated version uses actual claims data for an objective assessment.
Take the GRS →
Average Therapy Cost. The annualized per-member cost of the therapy under governance review. Pilot baseline: $7,242. Used as the multiplier in GSV calculations and the denominator in PMPM-as-percentage-of-therapy-cost analysis.
See it in Economics →
Therapy Adjustment Factor. The estimated fraction of ATC affected by each governance outcome. Adjust: 0.25 (dose modification). Taper: 0.50 (planned step-down). Switch: 0.30 (cost differential to alternative). These are conservative, directional weights, not actuarial projections.
See it in the RIR Lab →
The complete deliverable produced at the end of each governance cycle. Contains the configuration fingerprint, outcome distribution, RIR benchmark, immutable audit trail, and governance narrative. The documented answer to: "What is your continuation oversight posture?" No other product in this space produces this document.
See the Artifact →
A complete, versioned, tamper-evident record of every governance parameter: which therapies are in scope, what triggers flag a member for review, what thresholds apply, who is qualified to review, and when the review window opens. The hash fingerprint changes if any parameter changes, proving the review wasn't arbitrary and making two cycles directly comparable.
See it in the Artifact →
The condition where patients remain on therapy not because a clinician reviewed their case and determined continuation was appropriate, but because the prescription was written, the refill was authorized, and nobody ever circled back. It's not fraud, waste, or abuse. It's what happens when a system heavily governs initiation and then looks away. Visible inertia is governable. Invisible inertia is not.
See it in Structural Condition →
The three-part diagnostic that proves the governance gap exists: (1) PA governs initiation, not continuation. (2) PBMs manage formulary, not ongoing clinical appropriateness. (3) Internal review triggers the duty to act, making governance structurally impossible from inside. These three conditions are why the gap has no owner.
See the full thesis →
The legal and regulatory obligation that activates the moment an internal team identifies a clinical concern. A plan's own medical director who flags a member on 18 months of semaglutide with no measurable response can't just document it. They're a covered entity with fiduciary obligations and licensure standards. Their governance review has just become a utilization management action. This is why continuation governance must be external and advisory-only.
See it in Shape Thesis →
The percentage of reviewed cases where a qualified clinical reviewer confirmed continuation is clinically appropriate: 40% in the first governance cycle (2,300 of 5,750). Complement of RIR: DAR + RIR = 100%, always. This is a positive clinical quality statement, not just evidence the system is neutral. A CMO who can say "we have documented clinical appropriateness findings on 2,300 members" is making a claim no PBM report, PA log, or spend analysis can produce. HEDIS-adjacent. Star Rating-adjacent.
See it in Metrics →
One complete 90-day governance cycle: cohort ingestion → trigger scan → case queue → advisory review → outcome monitoring → RIR calculation → governance summary. Each cycle produces an artifact. Each subsequent cycle compounds governance value through GPR tracking and configuration versioning.
See the Seven-Step Loop →
The percentage of the total cohort that meets one or more configured trigger thresholds and enters the governance queue. Pilot rate: 25% (6,250 of 25,000). A flag is not a finding. It's an invitation for structured review.
See it in the Loop →
The percentage of flagged members that receive a completed governance review within the cycle. Pilot rate: 92% (5,750 of 6,250). The 8% gap reflects cases where data was insufficient for a determination. Those are documented as incomplete, never excluded.
See it in Metrics →
The four governance outcomes: Continue: therapy confirmed appropriate, documented. Adjust: dose, monitoring, or clinical approach warrants reassessment. Taper: planned step-down pathway identified. Switch: alternative therapy warranted. Every outcome requires a clinical note. Every outcome is advisory.
See it in the Artifact →
The structural boundary that makes Cadence deployable where every other approach gets killed in committee. Cadence produces recommendations, never denials. No clinical authority is exercised. No utilization management action is triggered. The plan decides what to do with the governance signal through their existing channels, if anything. This boundary is absolute.
See it in Shape Thesis →
An objective governance assessment using your actual claims data (PA renewal patterns, reassessment gaps, therapy duration distributions) rather than self-reported answers. One-time engagement ($15K–$25K flat fee) that produces a scored report with specific gaps identified. The precursor engagement that makes the pilot decision obvious.
See it in Pilot →
The measurable output of a structured reassessment cycle. When 60% of reviewed members receive a trajectory change recommendation, that's a governance signal: evidence that structured review produced measurable intelligence. The signal is measured by RIR, valued by GSV, and tracked over time by GPR.
See it in Metrics →
Spend that governance can see, measure, and document. Before Cadence, continuation spend exists as a line item in claims data but produces no governance artifact. After one cycle, the economic shape of that spend (who's continuing, at what cost, with what clinical trajectory) becomes newly visible. GSV quantifies it. The artifact documents it.
See it in Economics →
The direction a member's therapy is heading: continuation at current parameters, modification (dose, monitoring, or approach), step-down, or switch to an alternative. Governance doesn't determine the trajectory. It makes it visible so someone can ask whether it reflects a deliberate clinical decision or an unchanged default.
See it in Structural Condition →
The practice of timestamping and fingerprinting every governance parameter at the start of each cycle so results are reproducible and auditable. If the configuration changes between Cycle 1 and Cycle 2, both versions are preserved, proving the review parameters weren't adjusted after the fact to produce a desired outcome.
See it in the Artifact →Every number has a source, a method, and a confidence tier.
Two governance cycles with clinical review. One independent national validation. 65,000 patients across four therapy categories. Here's what was measured, how it was measured, and where the boundaries are.
Cohorts 1 and 2 were conducted by the founder prior to Cadence's formation. We state this plainly because it matters. Cohort 3 is independent: a retrospective analysis of 30,734 patients in the NIH All of Us Research Program, using EHR-linked data, with no Cadence reviewer involved. The flag rate converged at 29.1%. A methodology manuscript has been submitted for peer review at JMCP. A validation manuscript using All of Us data is in preparation.
Independent Literature Context
The following estimates are derived entirely from published, peer-reviewed, and publicly available sources. No Cadence proprietary data is used as an input. Cadence cohort findings appear in the comparison column only.
A note on rigor: Cohorts 1 and 2 (34,500 members, two payer types, four therapy categories) were governed under the Cadence Governance Standard v1.1 with human clinical review. Cohort 3 (30,734 patients, NIH All of Us) applied the same trigger methodology computationally to independent EHR data. All numbers are real, measured, and internally consistent. The governance gap converged across all three populations. We publish the methodology, the formulas, and the falsification criteria because they survive scrutiny. If you find an error, we want to know.
The evidence base is actively expanding through three parallel tracks:
Methodology manuscript submitted for peer review at JMCP. Additional policy and reform commentaries have been submitted to other journals.
Academic collaboration outreach to independent researchers with institutional claims data access. Goal: apply CGS trigger logic to a population the founder never touched. If the governance shape converges, the finding is independently reproducible.
NIH All of Us Researcher Workbench. 30,734 patients analyzed across four therapy categories using EHR-linked medication and laboratory data. Flag rate: 29.1%. Zero-monitoring prevalence: 22.3%. Validation manuscript in preparation.
The population-level validation is complete. The methodology paper is under peer review. The first external pilot engagement will produce the fourth independent dataset and the first client-facing governance artifact under the standard.
Three independent populations.
The same pattern appeared.
Why high-cost chronic therapies are structurally ungoverned after initiation, and what three independent analyses revealed
White Paper v5.2 · March 2026 · Cadence, LLC · showyourwork.health
A note on versioning. This white paper, like all Cadence materials, is a living document. As we conduct additional governance cycles, refine our methodology, and incorporate feedback from pilot engagements, the data and analysis presented here may be updated. Version numbers are tracked. Previously published versions remain available upon request. Every number in every version is classified by confidence tier (Measured, Derived, or Directional) and is traceable to its source methodology. We update because the data warrants it, not to revise the narrative.
High-cost ongoing treatments are heavily governed at initiation and structurally ungoverned during continuation. Prior authorization checks eligibility. Utilization management intervenes on individuals. Claims analytics reports spend. No existing instrument assesses whether patients continuing on expensive therapies are doing so under active clinical oversight or through therapeutic inertia: the passive renewal of prescriptions that no one has reassessed.
This paper describes the structural conditions that produce the continuation governance gap, explains why internal teams cannot fill it, and presents measured outcomes from two governance cycles (34,500 members, human clinical review) alongside an independent validation (30,734 patients, NIH All of Us, computational analysis). The governance shape was consistent across all three populations: 25–29% flag rates, and where clinical review was conducted, 58–60% warranted trajectory change recommendations.
The methodology is codified in the Cadence Governance Standard (CGS v1.1), an eight-section published specification that defines what constitutes a valid continuation governance cycle. CGS v1.1 requires seven trigger categories, including BIOSIM and LABGAP triggers that emerged from second-cohort findings. Evidence is classified across three confidence tiers: Measured, Derived, and Directional. Falsification criteria are published.
I. The Problem
Every specialty therapy begins as a transaction: authorized, justified, documented at the point of initiation. Then it becomes an annuity. The refills process, the spend accrues, and the system that approved initiation steps aside. Discontinuation requires an event: an adverse reaction, a formulary change, a loss of coverage. Continuation requires nothing.
Prior authorization may renew once a year. When it does, it checks whether the patient still meets the eligibility criteria for the drug. Is the diagnosis present? Is the prescriber authorized? Is the drug on formulary? It does not ask whether the therapy is still producing outcomes. It does not ask whether the dose that was escalated six months ago ever produced a response. It does not ask whether anyone has looked at this patient's clinical trajectory since initiation. PA renewal is an access gate. It was never designed to be a governance instrument.
The result is a population-level blind spot. Patients continue for months or years with no structured reassessment of whether their therapy is still clinically appropriate. The cost compounds and the refills process. And no documented record exists to show that the ongoing state of continuation was ever examined.
The Three-Condition Diagnostic
The governance gap persists because three conditions are simultaneously true.
No structured reassessment interval for continuation. PA renewal checks eligibility at 6–12 month intervals. Between renewals, refills process automatically. Some specialty agents have no PA requirement at all, particularly at large self-funded employers who negotiate broad formulary access. For those members, there is literally zero structured touchpoint for continuation oversight.
Governance signals are dispersed across systems. Claims show utilization, labs show response, and provider notes show rationale. Pharmacy data shows fill patterns. No single system assembles these into a structured reassessment prompt. The intelligence exists. The assembly does not.
Eligibility tools are performing governance work they were never designed to do. Step therapy, PA renewal, and utilization management referral are access and eligibility instruments. PA is binary: approved or denied. UM is intervention-grade. Neither produces a documented governance cycle, a measured influence metric, or an artifact showing that continuation was assessed at the population level.
The Duty-to-Act Barrier
The structural gap cannot be filled internally. The moment an internal medical director identifies a member on 18 months of semaglutide with no measurable response, fiduciary obligations compel intervention. The governance review becomes a utilization management action, triggering P&T committee review, legal consultation, provider relations protocols, and member grievance procedures. The clinical team's identification of a continuation concern creates the obligation to act on it.
This is the duty to act. It is the reason no payer has built structured continuation governance internally at population scale. It is not a matter of capability. It is a matter of structural incentives. An external, advisory-only governance layer does not carry that obligation. It documents, measures, and produces an artifact. The plan retains full discretion over what to do with the signal through existing channels, if anything. That structural separation is what makes the governance layer possible.
Why This Matters Now
CMS's BALANCE Model launches GLP-1 coverage for Medicaid in May 2026 and Medicare Part D in January 2027, with a bridge demonstration beginning July 2026. Millions of new continuation members will enter the system. The plans managing these populations have no governance infrastructure for continuation. They will need one.
For self-funded employers, the pressure is indirect but real. As GLP-1 utilization normalizes across public programs, commercial scrutiny of continuation spend intensifies. Boards and benefits committees that could defer the governance question in 2024 will be asked about it in 2026. The regulatory convergence (Star Ratings, Medicaid quality measures, IRA pricing provisions, state PBM transparency reforms) is not mandating continuation governance yet, but it is converging on it.
The Consolidated Appropriations Act of 2026, signed in February 2026, requires PBMs to pass through 100 percent of rebates and mandates new transparency and disclosure obligations for employer health plans. This legislation increases fiduciary scrutiny of pharmacy benefit decisions but does not address whether ongoing therapies are still clinically appropriate. Continuation governance fills the structural gap that PBM reform exposes but does not close.
II. A Structured Approach
The methodology described here is codified in the Cadence Governance Standard (CGS v1.1), an eight-section published specification. Any organization, the authors or otherwise, can conduct a governance cycle that meets this standard. What follows is a summary of the approach and its constraints.
The Governance Cycle
A single governance cycle runs 90–120 days. The input is a de-identified claims extract: four required fields (member identifier, drug, dose, therapy start date) and up to six optional enrichment fields. Technology-assisted flagging identifies members meeting configured trigger thresholds. Qualified clinical reviewers — PharmD, MD, APRN, or PA — external to the plan sponsor, examine each flagged case and assign one of four outcomes: Continue, Adjust, Taper, or Switch. Every determination is recorded in an audit trail. The output is a governance artifact: a sealed, versioned document containing the governance parameters, outcome distribution, measured metrics, and documented case log.
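The extract format can be checked mechanically before a cycle begins. A minimal sketch, assuming illustrative column names for the four required fields (CGS v1.1 defines the actual schema):

```python
import csv
import io

# Field names here are illustrative, not the CGS-specified schema.
REQUIRED = ["member_id", "drug", "dose", "therapy_start"]

def validate_extract(text: str) -> list[dict]:
    """Parse a de-identified claims extract and confirm the four required
    fields are present. Optional enrichment fields pass through untouched."""
    rows = list(csv.DictReader(io.StringIO(text)))
    if not rows:
        raise ValueError("extract is empty")
    missing = [f for f in REQUIRED if f not in rows[0]]
    if missing:
        raise ValueError(f"extract missing required fields: {missing}")
    return rows

sample = (
    "member_id,drug,dose,therapy_start,last_hba1c\n"
    "M001,semaglutide,1.0 mg,2024-01-15,7.2\n"
)
rows = validate_extract(sample)
print(len(rows), rows[0]["drug"])  # 1 semaglutide
```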
The Trigger Architecture
Seven trigger types are required for a CGS-compliant cycle: duration on therapy exceeding twelve months without documented reassessment (DUR12), dose escalation without corresponding outcome improvement, laboratory monitoring gaps, new or changed comorbidities affecting therapy appropriateness, absence of measurable outcome in the trailing period (NOOUT), biosimilar available but not considered (BIOSIM), and monitoring labs overdue beyond 90 days (LABGAP). All seven must be configured. Additional triggers are permitted. All thresholds are documented in the review configuration and locked before the cycle begins. BIOSIM and LABGAP were added to CGS v1.1 after second-cohort findings demonstrated their clinical and economic significance.
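The trigger scan reduces to a pass over the extract, one member at a time. A sketch of two of the seven triggers (DUR12 and LABGAP), with illustrative field names; the locked review configuration defines the real thresholds for a given cycle:

```python
from datetime import date

DUR12_MONTHS = 12   # months on therapy without documented reassessment
LABGAP_DAYS = 90    # days monitoring labs may be overdue

def months_between(start: date, end: date) -> int:
    return (end.year - start.year) * 12 + (end.month - start.month)

def scan_member(member: dict, as_of: date) -> list[str]:
    """Return the trigger codes this member meets (two of the seven shown)."""
    flags = []
    # DUR12: over twelve months on therapy, no documented reassessment.
    if (months_between(member["therapy_start"], as_of) >= DUR12_MONTHS
            and member.get("last_reassessment") is None):
        flags.append("DUR12")
    # LABGAP: monitoring labs missing or overdue beyond the threshold.
    last_lab = member.get("last_monitoring_lab")
    if member.get("requires_lab_monitoring") and (
            last_lab is None or (as_of - last_lab).days > LABGAP_DAYS):
        flags.append("LABGAP")
    return flags

member = {
    "member_id": "M001",
    "drug": "semaglutide",
    "therapy_start": date(2024, 1, 15),
    "last_reassessment": None,
    "requires_lab_monitoring": True,
    "last_monitoring_lab": date(2024, 6, 1),
}
print(scan_member(member, as_of=date(2025, 6, 1)))  # ['DUR12', 'LABGAP']
```

A flag produced this way is an entry into the case queue, not a finding; the clinical reviewer makes every determination.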
The Outcome Taxonomy
Four outcomes with no fifth category. Continue means a qualified reviewer confirmed that continuation is clinically appropriate at the current dose for this member at this time. Adjust means dose modification is indicated. Taper means a planned step-down pathway was identified. Switch means an alternative therapy is warranted. Every outcome is advisory. No outcome compels provider action. The Reviewer Influence Rate (RIR) is the sum of Adjust, Taper, and Switch as a percentage of completed reviews. The Documented Appropriateness Rate (DAR) is the Continue outcome as a percentage of completed reviews. RIR plus DAR equals 100 percent, always.
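The outcome arithmetic is small enough to state directly. A sketch using Cohort 1's published figures, with Adjust/Taper/Switch counts rounded to whole cases from the 25/20/15 percentages (3,450 influenced in total):

```python
def rir_dar(outcomes: dict[str, int]) -> tuple[float, float]:
    """RIR = (Adjust + Taper + Switch) / completed reviews;
    DAR = Continue / completed reviews. The four outcomes partition every
    completed review, so RIR + DAR = 1.0 by construction."""
    completed = sum(outcomes.values())
    influenced = outcomes["Adjust"] + outcomes["Taper"] + outcomes["Switch"]
    return influenced / completed, outcomes["Continue"] / completed

# Cohort 1: 5,750 completed reviews.
cohort1 = {"Continue": 2300, "Adjust": 1438, "Taper": 1150, "Switch": 862}
rir, dar = rir_dar(cohort1)
print(f"RIR {rir:.0%}, DAR {dar:.0%}")  # RIR 60%, DAR 40%
```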
The Constraints
No clinical override. No denial authority and no compulsion mechanism. The governance cycle produces a signal and an artifact. It never overrides clinical judgment.
External reviewers. Minimum two credentialed reviewers per cycle, external to the plan sponsor. No reviewer adjudicates more than 40 percent of cases.
Review record. Sealed at cycle close with no post-seal modification permitted.
Governance setup. Versioned and locked before the cycle begins. Any mid-cycle change voids the fingerprint and requires re-versioning with documented justification.
Documented neutrality. The DAR is not a residual. It is a positive clinical quality statement. A governance cycle that produces a 40% DAR has documented that 40% of its reviewed population is on the right therapy at the right dose, confirmed by a qualified human reviewer, member by member.
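The configuration lock behind the "Governance setup" constraint can be made verifiable with an ordinary cryptographic digest. A sketch of one way to fingerprint a review configuration (an illustration, not the mechanism CGS v1.1 specifies):

```python
import hashlib
import json

def fingerprint(config: dict) -> str:
    """Hash a canonical serialization of the review configuration. Locked at
    cycle start; any later change to any parameter changes the digest, so a
    mid-cycle edit is detectable against the sealed artifact."""
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

cycle1 = {"standard": "CGS v1.1", "DUR12_months": 12, "LABGAP_days": 90}
locked = fingerprint(cycle1)

# Reproducible: the same configuration always yields the same digest...
assert fingerprint(dict(cycle1)) == locked
# ...and any threshold change voids it, forcing documented re-versioning.
assert fingerprint({**cycle1, "LABGAP_days": 120}) != locked
```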
The Standard and the Certifier
The methodology is codified in CGS v1.1, a published specification. The standard is open. Any organization can conduct a governance cycle that meets it. But a published standard requires a certifying body. The organization that authored CGS is currently the only entity that certifies compliance and issues the Governance Certificate upon cycle completion. This is a structural position, not a marketing claim: the first certifier in a new category defines the benchmark, owns the dataset, and establishes the credential that subsequent entrants either adopt or compete against. Every cycle certified adds to an industry benchmark that did not exist before this methodology was published. The governance benchmark compounds with each engagement, and the organizations that earn certification earliest shape the standard everyone else will be measured against.
III. What Two Independent Cohorts Found
The methodology was applied to three independent populations; the two cohorts with full clinical review are detailed below. Cohort 1 and 2 numbers are measured from governance cycles. Cohort 3 numbers come from the independent computational EHR analysis summarized in the evidence discussion above.
Cohort 1: Commercial Health Plan
25,000 members. Single commercial payer. GLP-1 agonist continuation cohort. 90-day cycle.
| Metric | Value |
|---|---|
| Cohort / Flagged / Reviewed | 25,000 / 6,250 (25%) / 5,750 (92%) |
| RIR / DAR | 60% / 40% |
| Outcome distribution | DAR 40% · Adjust 25% · Taper 20% · Switch 15% |
| GSV (TAF-weighted) | $8.6M (ATC $7,242 · TAFs: 0.25 / 0.50 / 0.30) |
| Cycle 2 GPR | 50.0% (1,726 of 3,450 persisted) |
| Cycle 2 RIR / Cumulative GSV | 52% / $15.7M |
Cohort 2: Self-Funded Employer
9,500 members. Self-funded employer plan. Multi-category: GLP-1, biologic immunologics, specialty behavioral health, oncology maintenance.
| Metric | Value |
|---|---|
| Cohort / Flagged / Reviewed | 9,500 / 2,375 (25%) / 2,185 (92%) |
| RIR / DAR | 58% / 42% |
| Outcome distribution | DAR 42% · Adjust 23% · Taper 21% · Switch 14% |
| ATC | $11,840 (higher mix: biologics + oncology) |
| GSV (TAF-weighted) | $5.3M (per-member: $557) |
| RIR by therapy class | GLP-1 56% · Biologics 61% · Behavioral 54% · Oncology 59% |
| PA blind spot finding | 62% had no PA; flag rate 34% (no PA) vs 19% (with PA) |
The Shape Holds
Different payers, different geographies, different therapy categories, yet the governance shape held. The measured pattern, 58–60% RIR paired with 40–42% DAR, converged across both populations. The outcome distributions were close but not identical: 40/25/20/15 versus 42/23/21/14. Close enough to validate the structural pattern. Different enough to demonstrate these are independent measurements, not artifacts of a single methodology applied to a single dataset.
The cross-category data from Cohort 2 is particularly significant. The RIR ranged from 54% (behavioral health) to 61% (biologics) across four therapy classes. The variation is real: behavioral health cases have lower flag rates but higher reviewer complexity. But the pattern holds across all categories. Continuation inertia is not a GLP-1 phenomenon. It is a structural phenomenon that appears wherever no one has looked.
The PA Blind Spot
In the employer cohort, 62% of specialty continuation members had no prior authorization. For those members, no existing mechanism (not PA, not UM, not PBM reporting) ever triggers a reassessment. The governance gap was 79% wider in the no-PA population: a 34% flag rate where PA did not exist, versus 19% where it did. The plans that negotiated broad formulary access, with no step therapy and no PA, gave their members better benefits. They also created a larger blind spot.
BIOSIM and LABGAP: Triggers That Emerged From Data
During Cohort 2 analysis, two patterns emerged that were not captured by the original five trigger categories. BIOSIM identified members on branded biologics with available biosimilars who had never received a documented switch assessment (72% RIR among flagged cases, with an estimated annual cost differential of approximately $1.2M for 68 influenced cases). LABGAP identified members continuing on therapies requiring laboratory monitoring who had not had relevant labs in over 10 months (64% RIR). LABGAP is a patient safety signal before it is a governance signal. Both triggers were added to CGS v1.1 as required categories.
Evidence Classification
Every number published from these cycles falls into one of three confidence tiers.
| Tier | Definition | Examples |
|---|---|---|
| Measured | Directly observed. Counted, not modeled. | Cohort sizes, outcome counts, RIR, DAR, GPR, ATC, flag rates, completion rates, category RIR |
| Derived | Calculated from measured inputs via documented formulas. | GSV, blended TAF, per-member GSV, sensitivity ranges, cumulative GSV |
| Directional | Forward-state estimates. Assumptions explicit. | TAF weights, category expansion projections, national market sizing |
IV. Governance Metrics
Reviewer Influence Rate (RIR)
The percentage of reviewed cases where structured reassessment correlated with a trajectory change recommendation. Measured association, not claimed causation. RIR never claims the governance cycle caused the change. It measures whether reassessment and trajectory change co-occur. Cohort 1: 60%. Cohort 2: 58%.
Documented Appropriateness Rate (DAR)
The percentage of reviewed cases where a qualified clinical reviewer confirmed continuation is clinically appropriate. RIR plus DAR equals 100%, always. This is not a residual. A plan that produces a 42% DAR has documented that 42% of its reviewed population was confirmed by a qualified human reviewer to be on the right therapy at the right dose. No PBM report, PA log, or spend analysis can make that claim. The DAR is HEDIS-adjacent. It is Star Rating-adjacent. It is the number that makes a medical director comfortable, a regulator satisfied, and a board confident.
Governance Persistence Rate (GPR)
The percentage of Cycle 1 influenced cases where the trajectory change persisted into Cycle 2 without compulsion. Measured at 50% across 3,450 influenced cases: 60% persistence for dose adjustments, 45% for tapers, 40% for switches. In half the cases where a reviewer flagged a trajectory concern, the treating provider independently arrived at a similar conclusion within 90 days. No override and no denial authority. The signal persisted because it was clinically sound.
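The 50% figure is reproducible from the per-outcome persistence rates. A sketch, with influenced-case counts rounded to whole cases from the published percentages:

```python
# Measured per-outcome persistence (Cohort 1, Cycle 2) and influenced-case
# counts (3,450 total, rounded to whole cases).
persistence = {"Adjust": 0.60, "Taper": 0.45, "Switch": 0.40}
influenced = {"Adjust": 1438, "Taper": 1150, "Switch": 862}

persisted = sum(influenced[o] * persistence[o] for o in influenced)
gpr = persisted / sum(influenced.values())
print(f"GPR = {gpr:.1%}")  # 50.0%, consistent with 1,726 of 3,450
```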
Governance Signal Value (GSV)
TAF-weighted economic signal of governance-relevant activity. Directional, not a projection. Cohort 1 produced $8.6M in GSV on 25,000 members ($345 per member, ATC $7,242). Cohort 2 produced $5.3M on 9,500 members ($557 per member, ATC $11,840, higher due to biologic and oncology mix). Cumulative two-cycle GSV for Cohort 1: $15.7M. Cross-cohort first-cycle GSV: $13.9M. The GSV makes previously invisible economic activity measurable. What the organization does with that visibility is a governance question, not an economic one.
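Both GSV figures are reproducible from the published inputs. A sketch of the TAF-weighted calculation (the TAF weights are Directional-tier, as noted in the evidence classification):

```python
TAF = {"Adjust": 0.25, "Taper": 0.50, "Switch": 0.30}  # Directional weights

def gsv(reviewed: int, outcome_pct: dict[str, float], atc: float) -> float:
    """TAF-weighted Governance Signal Value: for each influenced outcome,
    cases in that outcome x average annual therapy cost x adjustment factor."""
    return sum(reviewed * outcome_pct[o] * atc * TAF[o] for o in TAF)

cohort1 = gsv(5750, {"Adjust": 0.25, "Taper": 0.20, "Switch": 0.15}, 7242)
cohort2 = gsv(2185, {"Adjust": 0.23, "Taper": 0.21, "Switch": 0.14}, 11840)
print(f"${cohort1/1e6:.1f}M  ${cohort2/1e6:.1f}M")  # $8.6M  $5.3M
```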
V. What We Don't Know
A methodology earns trust by acknowledging its own boundaries. Seven open questions remain.
Causation. RIR measures correlation, not causation. The reviewer may have surfaced what the provider was already considering. Proving causation would require a randomized controlled trial, which is not feasible in a governance context.
Long-term persistence. GPR is measured at 90 days. We do not yet know 6-, 12-, or 24-month persistence curves.
Therapeutic inertia prevalence. The thesis assumes continuation inertia is widespread. If a significant proportion of continuation is actively managed through informal channels the governance cycle does not capture (undocumented provider-patient conversations, clinic-level protocols not reflected in claims), the governance gap may be narrower than the 58–60% RIR suggests. The cross-cohort data provides the first structured measurement of this question. Multi-payer replication will refine the answer.
TAF precision. Therapy Adjustment Factors (0.25/0.50/0.30) are directional estimates of economic magnitude by outcome type, not claims-derived. They are explicitly classified as Directional in all GSV calculations.
Provider response mechanisms. GPR measures whether trajectory changes persisted, not why. The mechanism is opaque.
Sample diversity. 65,234 patients across commercial payer, self-funded employer, and a nationally-recruited NIH cohort. Four therapy categories. The governance gap converged in all three. Whether it holds in Medicaid managed care, Medicare Advantage, and international populations remains to be measured.
Domain scope. The methodology has been validated in pharmacy continuation populations. Whether the structural condition (authorized access without structured reassessment) produces similar governance shapes in non-pharmacy continuation domains (intensive outpatient, specialist referral, durable medical equipment) is an open question. The governance loop is designed to be domain-agnostic, but this has not been tested.
VI. Falsification Criteria
Three conditions would prove this thesis wrong.
First: evidence that a major payer or PBM has deployed structured, recurring continuation governance with a measured influence metric and auditable configuration at population scale. We have searched NCQA databases, URAC standards, AMCP proceedings, and PBM product catalogs. Nothing fits.
Second: an industry standard that mandates periodic continuation reassessment with documented outcomes, not PA renewal, which checks eligibility, but actual governance with a measured cycle. None exists.
Third: a technology platform already occupying the layer between claims analytics and utilization management, producing the governance products with influence metrics. Cotiviti, Waystar, and Zelis are the closest adjacent platforms. None produce what this methodology produces.
These falsification criteria are published because we have looked for the evidence that would disprove us and have not found it. 65,234 patients. Commercial payer, self-funded employer, NIH national cohort. The gap converged every time. If any of the three conditions above are met, the thesis requires revision.
VII. Implications
For Health Plans
The governance artifact is a new object. No existing process produces a documented, population-level assessment of continuation governance posture with an influence metric, a documented appropriateness rate, and a sealed audit trail. The first plans to conduct governance cycles are shaping the industry's first continuation governance benchmark. By the third cycle, their data defines the standard everyone else will be measured against.
For Self-Funded Employers
ERISA requires prudent stewardship of plan assets. Specialty continuation spend flowing with no governance touchpoint is an unmanaged fiduciary exposure. The governance artifact is the answer: a documented record that the plan exercised structured oversight over its highest-cost continuing therapies. The stop-loss application is immediate: a plan sponsor that can present a governance credential at renewal is in a fundamentally different negotiating position than one that cannot.
For the Industry
The Cadence Governance Standard (CGS v1.1) defines what a valid governance cycle is. If the industry adopts this standard as the benchmark for continuation governance, the organization that authored it becomes the certifying body. The Governance Certificate, issued upon completion of a CGS-compliant cycle, is the credential. The benchmark is the network effect. This is the trajectory of every standard-setting body in healthcare data: define the standard, certify compliance, own the benchmark.
VIII. Availability
The methodology described in this paper is available as a governance service under the name Cadence — Continuation Governance Intelligence. Cadence is a governance service, not a software platform. We operate the cycle. The organization holds the artifact.
Governance Cycle: 90-day structured cycle. One de-identified claims CSV in, one governance artifact out. Pricing: $4.50–$6.00 per governed member per month.
Facilitated Governance Assessment: Standalone half-day diagnostic. $15,000–$25,000.
Governance Certificate: Credential issued upon cycle completion. Included with the governance cycle.
Strategy Advisory: Per-cycle results review. $5,000–$10,000.
Product demonstration and interactive tools: showyourwork.health
The product site includes interactive governance calculators, therapy-mix scenario modeling with configurable cost ranges, stop-loss impact estimation, and the full CGS v1.1 specification. All evidence tables, formulas, and falsification criteria referenced in this paper are reproduced with interactive tooling.
Pilot inquiries:
Published standard: CGS v1.1, downloadable at showyourwork.health
© 2026 Cadence, LLC
showyourwork.health
The full case in one scroll.
Five minutes.
The problem, the solution, the evidence, and the ask. Every number sourced.
Your highest-cost members — GLP-1s, biologics, oncology maintenance — were authorized once. PA approved access. The refills have been processing automatically ever since, and no governance record exists for any of it. Ask anyone in your organization to produce the document proving continuation was governed last quarter. It doesn't exist.
The number of governance artifacts most plans can produce for continuation oversight: zero.
Cadence runs a structured, 90-day governance cycle on your continuation population. Advisory-only. No denials. You send a de-identified claims CSV. Four required fields, six optional fields that enrich the clinical signal. Qualified clinical reviewers make every determination. At the end, you hold a governance artifact: measured influence rate, documented case log, configuration fingerprint, and a governance narrative your C-suite can read.
The signal without the handcuffs: we don't replace your PBM, integrate with your systems, or trigger the compliance machinery that makes internal governance impossible. Always advisory. No other process produces a governance record with an influence rate, documented appropriateness findings, and a sealed review log.
No health plan has built this internally. The moment an internal team flags a continuation concern, fiduciary obligations compel intervention. The governance review becomes a UM action. Cadence is external and advisory. That structural separation is why the governing body has to be independent.
Two governance cycles with clinical review (34,500 members) plus independent validation on 30,734 patients in the NIH All of Us Research Program. 58–60% showed grounds for trajectory change when a qualified reviewer finally looked. 40–42% were confirmed clinically appropriate. In the national cohort, only 5% of flagged patients had sufficient monitoring documentation to confirm appropriateness at all. 50% of advisory signals persisted into Cycle 2. No override. No denial authority.
Conducted under a published standard (CGS v1.1). Certified by the only organization that issues continuation governance credentials.
1% of therapy cost for structured governance. The artifact is the return. Model your numbers →
CMS's BALANCE Model launches GLP-1 coverage for Medicaid mid-2026. Millions of new continuation members entering a system with no governance infrastructure. Star Ratings, Medicaid quality measures, IRA pricing accountability, state PBM reform. They all point at the same gap.
Continuation governance is moving toward mandatory. The question is whether you build it under pressure or present eight cycles of data when the requirement arrives.
Request a pilot briefing and we'll respond within one business day.
Your clients have no governance credential.
Their competitors won't either, unless you bring it.
Your clients are managing specialty continuation populations (GLP-1s, biologics, oncology) costing tens of thousands to hundreds of thousands per member per year, with no structured oversight. No one's stop-loss carrier has ever seen a governance credential at renewal. You change that.
No stop-loss carrier has ever seen a continuation governance credential at renewal. The Cadence Governance Certificate is the document your clients hold, and no one else's clients can. You bring a differentiation story built on governance evidence, not vendor promises. Your clients' competitors aren't having this conversation with their carriers yet.
ERISA requires prudent stewardship. Specialty continuation spend flowing with no governance touchpoint is an unmanaged exposure. The artifact is the documented answer to "what structured oversight did you exercise?": the question your client wants to answer before it gets asked.
"Your plan is spending $3M+ a year on specialty continuation therapies. Your PBM manages formulary. Your TPA processes claims. Nobody is governing whether those therapies are still working. There's a new service. One CSV, 90 days, $6/member/month, produces a governance artifact your board can hold and a credential your stop-loss carrier has never seen. Your competitors don't have it yet."
"Our client ran an independent governance cycle on their specialty book. Here's the certificate. Sixty percent of continuation members had a trajectory change identified by external PharmD reviewers. Forty percent were confirmed clinically appropriate. Every case is in the audit trail. You're looking at the only documented continuation oversight on your desk this quarter."
The underwriter's question has always been: "What oversight does this plan exercise on continuation spend?" Until now, the answer was silence or a PBM trend report. The Governance Certificate is a direct, documented answer, with metrics, methodology, and a documented audit trail attached.
This is not a savings guarantee. It is evidence of structured governance. The kind of documentation that changes how an underwriter models risk on a book they've never been able to see into before.
Claims history, network discounts, PBM contract, maybe a trend report. No governance record, no influence metric, no documented appropriateness findings. No credential.
A governance artifact with measured RIR, a 40% Documented Appropriateness Rate, case-level documentation, a configuration fingerprint, and a Governance Certificate the underwriter has never seen from any plan sponsor. That's a different renewal.
Introduce Cadence to your self-funded clients managing specialty populations. We handle the governance cycle. Your client provides a de-identified claims extract, we run 90 days of structured reviews, they hold the artifact and the certificate. Your role: the advisor who brought governance to the table.
One CSV. One cycle. One credential no one else has. The engagement creates a recurring touchpoint with your client and a measurable differentiator at every stop-loss renewal, without you building, staffing, or operating anything.
Stop-loss premiums on self-funded specialty populations typically run 8–12% of expected claims. For a 10,000-member employer with $7,200 in annual total cost (ATC) per member and ~50% continuation, that's roughly $36M in continuation exposure and $3–4M in annual stop-loss premium.
A 1–2% improvement in stop-loss renewal terms on a $3–4M premium is $30,000–$80,000 per year. A 90-day governance cycle for that 10,000-member population costs roughly $180,000 at $6 PMPM. If the certificate's influence on renewal terms persists across even a few renewal cycles, governance pays for itself, and the artifact, the RIR benchmark, and the DAR documentation are yours to keep.
These are directional estimates, not guarantees. Stop-loss pricing depends on carrier, specific loss history, and attachment points. The point: documented governance is a variable no carrier has seen from any employer, and variables that reduce uncertainty tend to reduce premiums.
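The arithmetic above can be reproduced in a few lines. A minimal sketch using the illustrative figures from this example (10,000 members, $7,200 ATC, ~50% continuation, one 90-day cycle at $6 PMPM); every input is a directional assumption from this page, not a guaranteed outcome.

```python
# Illustrative stop-loss economics, using the assumed figures above.

members = 10_000
atc_per_member = 7_200           # annual total cost per member, USD
continuation_share = 0.50        # share of spend that is continuation

continuation_exposure = members * atc_per_member * continuation_share
# = $36M in continuation exposure

premium_low = continuation_exposure * 0.08    # 8% of expected claims
premium_high = continuation_exposure * 0.12   # 12% of expected claims
# ≈ $2.9M–$4.3M in annual stop-loss premium

improvement_low = premium_low * 0.01          # 1% better renewal terms
improvement_high = premium_high * 0.02        # 2% better renewal terms
# ≈ $29K–$86K per year, the page's rounded $30K–$80K range

cycle_cost = members * 6 * 3     # $6 PMPM over a 90-day (3-month) cycle
# = $180,000 per governance cycle

print(f"Exposure: ${continuation_exposure / 1e6:.0f}M")
print(f"Premium range: ${premium_low / 1e6:.1f}M-${premium_high / 1e6:.1f}M")
print(f"Renewal improvement: ${improvement_low:,.0f}-${improvement_high:,.0f}/yr")
print(f"Cycle cost: ${cycle_cost:,}")
```

Under these assumptions the cycle cost exceeds a single year's renewal improvement, which is why the payback case rests on the certificate influencing terms across more than one renewal.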
PDF-ready summary for your next client meeting or carrier renewal conversation.
We'll send you a broker briefing with pricing, positioning, and the governance certificate preview.
The front door is managed. The back door is managed.
The hallway runs on autopilot.
Specialty pharmacy has two well-managed doors. Initiation: PA, step therapy, formulary design. The acute event: the surgery, the hospitalization, the adverse reaction. Billions flow through managed processes at both ends. Between them, a member continues on therapy for 12, 18, 30 months. The cost grows. No structured process exists to assess whether the therapy is still working.
This is not a failure of any existing tool. Claims analytics sees the spend. PA checks eligibility. UM intervenes on individuals. PBMs manage formulary. Each performs its function. The problem is structural: no instrument was designed to govern ongoing continuation at the population level. The intelligence exists in fragments across four systems. The assembly does not.
That is where Cadence operates. The hallway.
What happens when the governance stays on.
A single cycle is a diagnostic. You have seen what it produces: the 58–60% RIR, the 40% DAR, the artifact, the certificate. That baseline is where Sustain begins.
Sustain is what happens when the governance layer remains in place and each cycle builds on the last.
The first cycle uses population-level trigger thresholds from the CGS. The second starts from a calibrated baseline. Triggers refined to your population. A self-funded employer with a biologic cohort does not need the same sensitivity as a Medicaid plan managing GLP-1 continuation. After one cycle, the configuration fingerprint reflects your population, not a reference model.
The second cycle produced a 52% RIR and ~$7.1M in additional governance signal. Cumulative two-cycle value: $15.7M. The 8-point decline from Cycle 1 (60%) to Cycle 2 (52%) is the governance working. The easiest cases were influenced first. The remaining population is harder. That attenuation is the signature of a functioning system, not a failing one.
By cycle three, the intelligence is predictive: you know which members are most likely to benefit from reassessment before the cycle begins. And a plan whose RIR has declined to 15% by cycle five holds an 85% DAR. Documented confirmation that the vast majority of its continuation population is clinically appropriate. The governance found less to change because there was less to find.
After one cycle, your RIR is a number. After two, a trend. After three, a benchmark with a trajectory. As the Cadence client base grows, you compare your posture against anonymized cross-plan benchmarks. The standard that measures you is the same standard that measures everyone.
One cycle shows you the gap. Ongoing cycles govern it. The difference between a governance assessment and a governance layer is whether it stays on.
Three phases. One relationship.
A single 90-day cycle. The first artifact. The first measured RIR and DAR. The Governance Certificate for your next stop-loss renewal. Pricing: fixed PMPM or standalone Facilitated Assessment ($15–25K). The buyer's risk is one cycle. The output is permanent.
Continuous 90-day cycles. Triggers calibrated to your population. RIR benchmarked against your own prior data. GPR tracking tells you which signals persisted. Your stop-loss carrier sees a second certificate, then a third. Pricing: PMPM, scaled to cohort size. The governance layer is now permanent.
After two or more completed cycles with verified outcomes data, the economics can shift from fixed PMPM to performance-aligned pricing tied to documented governance value. The margin is wide: at reference parameters, the cycle needs to find trajectory changes in just 3.1% of reviewed cases to cover its cost. Two independent cohorts found 58–60%.
The phases are sequential but not mandatory. An organization can remain at Phase 1 indefinitely. Phase 2 is for organizations that want continuous oversight. Phase 3 is for mature engagements. Each phase earns the next. Nothing is assumed.
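The 3.1% break-even figure follows from a simple ratio: cycle cost divided by the total governance value if every reviewed case yielded a trajectory change. The reference parameters are not spelled out on this page, so the inputs below ($180K cycle cost, 1,000 reviewed cases, $5,800 of documented value per identified change) are hypothetical placeholders chosen only to illustrate the shape of the calculation, not Cadence's actual model.

```python
def break_even_rate(cycle_cost: float,
                    cases_reviewed: int,
                    value_per_change: float) -> float:
    """Fraction of reviewed cases that must yield a trajectory change
    for documented governance value to cover the cycle cost."""
    return cycle_cost / (cases_reviewed * value_per_change)

# Hypothetical inputs, labeled as assumptions: a $180K cycle,
# 1,000 reviewed cases, $5,800 of value per trajectory change.
rate = break_even_rate(180_000, 1_000, 5_800)
print(f"Break-even hit rate: {rate:.1%}")  # ~3.1% under these assumptions
```

Any observed rate far above the break-even threshold, such as the 58–60% reported for the reference cohorts, leaves the wide margin the text describes.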
Why the layer scales.
No provider network to build. No technology integration required. No denial authority to trigger. Advisory-only by design. A CSV in, an artifact out; the cycle can begin the week after a signed agreement. The governance layer is lightweight because it informs rather than intervenes. One PharmD governs a 25,000-member cohort per cycle. The marginal cost of the next 10,000 members is one additional part-time reviewer, not a new department.
An algorithm can flag a member. It cannot sign a governance determination. The signature is the product. Everything else is infrastructure that makes the signature possible.
What you hold after year one.
An organization that completes three governance cycles under the Cadence Governance Standard holds the following, none of which existed before the first cycle:
Configuration fingerprint, outcome distribution, measured RIR and DAR, immutable audit trail. Sealed. Versioned. Comparable across cycles.
Your stop-loss carrier has seen documented continuation oversight for three consecutive quarters. No other plan sponsor they underwrite can produce this.
RIR trending cycle-over-cycle. GPR data. Calibrated triggers tuned to your population. The beginning of a predictive governance model.
Documented evidence of quarterly structured oversight under an external published standard: the answer to "What governance did we exercise over the $40M in specialty continuation spend?"
The question is coming. ERISA requires prudent stewardship. The CAA of 2026 increases fiduciary scrutiny. BALANCE is expanding the continuation population. The organizations that can answer the governance question are the ones that started before it was asked.
Challenge us.
Bring a question about the framework, a challenge to the thesis, or a conversation about your population. We built this because we think it matters. We're ready.
The architecture was never completed.
Cadence completes it.
© 2026 Cadence, LLC