By Jason Brumback, CEO of Navix Health
A few weeks ago, an AI founder named Matt Shumer published a piece called “I Need You to Hear This” about what’s actually happening with artificial intelligence right now. It went viral. If you haven’t read it, you should. But here’s the problem: it was written for a general audience. It talks about lawyers and financial analysts and software engineers. It doesn’t talk about the person running a 60-bed residential treatment center who’s losing $40,000 a month to claim denials. It doesn’t talk about the clinical director whose therapists are spending half their day on documentation instead of patients. It doesn’t talk about you.
So I’m going to talk about you.
I’ve had the same conversation about forty times in the last two months. It usually starts with a facility owner or clinical director asking me some version of “so what’s the deal with AI in healthcare?” And I give them the polite answer. The diplomatic answer. The one that sounds measured and reasonable and lets everyone go back to their day feeling like they’ve got plenty of time.
I’m done giving that answer. The polite version is a lie by omission, and the people I work with deserve better than that.
I’ve spent six years building technology for behavioral health. Addiction treatment, mental health, eating disorder programs. I know this industry’s operations inside out, from intake to discharge to the denial letter that shows up eight weeks later. And I’m telling you that the ground underneath this industry is shifting faster than almost anyone in it realizes.
This isn’t a prediction about what might happen. I’m describing what’s already happening inside my own company, right now, today.
What I’m seeing from the inside
For years, AI improved at a pace you could absorb. A new capability here, an interesting demo there, nothing that forced you to rethink how you actually operated. That’s over.
New training techniques broke something loose in 2025. The rate of improvement accelerated. Then it accelerated again. Each new generation of AI arrived sooner than the last, and the jump between generations got wider instead of narrower. What used to feel like steady progress started feeling like something with momentum behind it, something that wasn’t going to slow down just because the rest of us needed time to adjust.
I’ll give you one concrete example from my own work. A few weeks ago, I described a complex feature I wanted built for our EMR platform. Business logic, user requirements, constraints. All in plain English. No technical specification. I gave the AI the problem and left my desk.
When I came back hours later, it hadn’t just written the code. It had tested the feature itself, navigating the application the way a real user would, identifying problems, fixing them, iterating until it was satisfied with the result. It flagged me only after it had decided the work met its own quality bar.
I have a large development team. That feature would have been a full sprint. The AI handled it in a single session, and the output was better than what I’d have gotten from most junior engineers.
That’s not the part that shook me. The part that shook me was the decision-making. These latest models don’t just follow instructions. They make choices that reflect something uncomfortably close to professional judgment: the ability to look at a set of options and know which one is right, not just which one is technically valid. If you’d told me a year ago that I’d be describing AI that way, I’d have thought you were exaggerating.
Why behavioral health is more exposed than most people realize
The AI labs started with code because it was strategically self-reinforcing. AI that writes better code builds better AI. That flywheel is now spinning fast enough that they’ve moved on to everything else.
Here’s the thing I need facility operators and clinical leaders to sit with: the daily operations of a behavioral health organization are almost entirely composed of the kind of work that AI has gotten devastatingly good at. I’m talking about cognitive, screen-based, read-analyze-write-decide work.
Think about what your staff actually does for eight hours a day. Progress notes. Treatment plan updates. Biopsychosocial assessments. Utilization review packets. Benefits verification. Authorization requests. Denial appeals. Chart audits. Compliance documentation. Every single one of those tasks is fundamentally about reading information, analyzing it against a set of criteria, making a judgment call, and producing a written output.
That is precisely the category of work where AI capabilities have exploded in the last six months.
I’m not going to walk through each of those workflows and explain exactly what AI can do with them, because frankly, that’s the product strategy I’ve spent years building and I’m not handing it to competitors in a newsletter. But I will tell you this: the capabilities that exist today are making me rethink fundamental assumptions about how a behavioral health facility should be staffed and operated. Not assumptions about five years from now. Assumptions about right now.
The one area where being specific helps everyone
There’s one problem so universal in this industry that talking about it openly doesn’t give anything away, and staying quiet about it would be irresponsible.
Facilities don’t lose revenue because their clinical work is bad. They lose revenue because the documentation doesn’t tell the right story in the right format for the right audience. A payer doesn’t deny a claim because the patient didn’t need treatment. They deny it because the paperwork didn’t prove medical necessity in exactly the language and structure their utilization review criteria demand.
This is a translation problem. Clinical reality into payer-readable documentation. And translation is one of the things AI does best. The ability to read a clinical record, understand what a specific payer’s guidelines require, identify gaps between what’s documented and what needs to be documented, and close those gaps. That capability is real, it’s available now, and it’s not theoretical.
If your facility is losing a double-digit percentage of revenue to denials and your strategy is hiring another biller or sending your team to documentation training, you’re solving a 2026 problem with a 2016 approach. The facilities that put AI between the clinical record and the payer are going to have a structural financial advantage that manually-operated shops cannot match on effort alone.
“We tried AI at our facility. It wasn’t ready.”
This is the most dangerous sentence in behavioral health right now.
If you tested an AI tool in 2023 or early 2024 and found it unreliable, your conclusion was correct for that moment. But that moment is gone. The distance between those early models and what’s available today isn’t incremental improvement. It’s a generational leap. The analogy isn’t upgrading from a Honda to a Lexus. It’s going from a horse to a jet engine.
The mistake I see facility leaders making is forming a permanent opinion based on a temporary experience. They tried it, it fell short, and they filed it away as “not ready for healthcare.” Meanwhile, the technology kept compounding. Every few months, a new generation arrives that makes the previous one look primitive. That’s not hype. That’s what I watch happen in my own product development cycle, over and over.
The other thing keeping people behind is that most of them used free tools. The free tier of any AI product is a year or more behind what paying users access. Drawing conclusions about AI from the free version is like evaluating a Tesla by test-driving a golf cart. The gap is that large.
The math that should keep you up at night
Forget the hype cycle. Look at the data.
Researchers have been tracking AI capability by measuring the length of real-world tasks, in hours of human-expert time, that a model can complete independently from start to finish. Twelve months ago, AI could reliably handle roughly ten minutes of expert-level work without supervision. Today, that number is approaching five hours. And that task length is doubling on a cycle of four to seven months.
Run that forward. Twelve months from now, you’re looking at AI that completes day-long tasks on its own. Two years out, week-long projects. The CEO of one of the major AI labs has said publicly that he expects 50% of entry-level white-collar jobs to be displaced within five years, and people inside the industry think that’s a generous timeline.
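If you want to sanity-check that projection yourself, the arithmetic is just compounding. Here is a minimal sketch in Python, using the figures above as assumptions (a roughly five-hour autonomous task length today, doubling every four to seven months); the function name `projected_hours` is mine, not from any benchmark.

```python
# Sketch of the extrapolation above. Assumptions, not hard data:
# AI can currently complete ~5 hours of expert work autonomously,
# and that task length doubles every 4 to 7 months.

def projected_hours(current_hours, months_ahead, doubling_months):
    """Hours of expert-level work AI completes autonomously, months_ahead from now."""
    return current_hours * 2 ** (months_ahead / doubling_months)

for months in (12, 24):
    slow = projected_hours(5, months, 7)  # conservative: 7-month doubling
    fast = projected_hours(5, months, 4)  # aggressive: 4-month doubling
    print(f"{months} months out: {slow:.0f} to {fast:.0f} hours of autonomous work")
```

Even the conservative end lands at roughly 16 hours in a year, two full workdays, and the aggressive end at 40. At two years, the range spans a 40-hour work week and well beyond. That is the math behind “day-long tasks” and “week-long projects.”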
Now inventory the roles at your facility. How many are fundamentally entry-level, white-collar, screen-based work? Benefits verification. Scheduling coordination. Data entry. Billing follow-up. Basic chart prep. Referral management. Preliminary compliance checks.
I’m not saying those roles disappear in six months. I’m saying the cost equation for those roles is about to shift in a direction that nobody in behavioral health is planning for. And the facilities that see it coming will have fundamentally different economics than the ones that don’t.
What I’d do if I were you
Stop treating AI as something to evaluate next year. Start treating it as an operational priority this quarter.
Get the best AI tools available and push them into your actual work, not as a curiosity, but as a serious test of what’s possible. Take your most painful, time-consuming administrative process and see what current models can do with it. Use paid versions. Use the most capable models. If you’re still forming opinions based on what free tools could do a year ago, you are operating on expired intelligence.
Audit your technology stack with brutal honesty. Is your current EMR designed to let you leverage AI, or is it a walled garden that traps your data in formats nothing else can touch? If your vendor isn’t already delivering AI capabilities, ask yourself how long you’re willing to wait. In this environment, “behind” doesn’t mean a version behind. It means a generation behind. And generations are arriving every few months.
Get strategic about your team. I’m not saying freeze hiring. I’m saying think hard about where human value is genuinely irreplaceable (relationships, clinical presence, licensed accountability, the judgment to oversee AI-assisted workflows) and invest there. For roles that are primarily administrative and screen-based, understand that the economics are shifting and plan accordingly. Not with panic. With foresight.
Build financial margin. Cut unnecessary fixed costs. Be cautious about commitments that assume your current revenue model is permanent. Give yourself room to move if things accelerate faster than your planning cycle. Based on what I’ve seen in just the last 90 days, they probably will.
What this actually makes possible
I’ve spent most of this piece creating urgency because I think the urgency is real. But I’d be telling you an incomplete story if I stopped there.
Behavioral health is an industry where clinicians are crushed under paperwork. Where good facilities bleed revenue from administrative complexity. Where demand for services massively outpaces the supply of qualified providers. Where burnout is driving talented clinicians out of the field at exactly the moment we need more of them.
AI doesn’t replace the therapeutic relationship. It doesn’t replicate clinical instinct built over years of practice. It doesn’t substitute for the human capacity that makes someone sit across from a patient in crisis and know exactly what to say. What it does, what it’s already starting to do, is remove the administrative burden that has been suffocating the people who do that work.
I can see a version of behavioral healthcare where clinicians give the vast majority of their working hours to patients instead of screens. Where facilities capture the revenue they’ve earned instead of hemorrhaging it to documentation gaps. Where compliance runs continuously in the background rather than erupting in a pre-audit scramble. Where the total cost of delivering care drops far enough that treatment reaches people it currently can’t.
That version is not theoretical. The technology needed to build it exists today. The question is who moves first and who gets left behind.
I’ve never been more convinced that the next 18 months will reshape behavioral health more than the last two decades. The people who engage now, who experiment without waiting for permission, who refuse to let skepticism or comfort hold them on the sidelines, are the ones who will define what this industry looks like on the other side.
The rest will be adapting to a world someone else built.
I know which side I’m on. You need to pick yours.