I was scrolling through job listings the other day and noticed something had quietly shifted. AI fluency wasn't tucked in the "nice to haves" section. It was right at the top, listed the same way companies used to list Microsoft Office. Baseline. Expected. Non-negotiable.
We've crossed a line. The question isn't "should I learn AI?" anymore. It's "how far behind am I if I haven't started?"
But here's the part nobody's saying out loud: having a ChatGPT tab open doesn't make you fluent. It means you've visited. The people pulling ahead aren't just using AI — they're being transformed by it. There's a big difference.
The landscape is splitting three ways
Some companies have gone all-in. Shopify's CEO told his team in 2025 that they need to prove AI can't do a job before asking to hire someone for it. Zapier has baked AI fluency into how they work and who they bring on. Google and Meta now track AI-assisted output in performance reviews. For these organizations, AI is table stakes — not a specialty.
Others are pumping the brakes. JPMorgan Chase, Goldman Sachs, Apple, Citigroup — all have restricted or outright banned generative AI tools for employees. And honestly, their reasons are legitimate: data leakage, IP risk, compliance exposure. 27% of organizations globally have gone this route. The problem isn't the caution — it's that restriction without a roadmap is just a delay. Employees are figuring it out on personal accounts anyway, outside any organizational visibility or safeguards.
Then there's the group I find most interesting: state and local government. They're willing but under-resourced. 43% of public sector employees used AI regularly in 2025, up from 17% in 2023. Long Beach, California was facing a 35% vacancy rate in some departments — it was taking 7 to 9 months to fill a single job — so they brought in an AI hiring tool just to keep up. Texas built a grassroots network where 700+ government employees started sharing AI templates and workflows with each other because no formal training existed. These aren't tech companies. They don't have innovation budgets. They're making it work anyway.
What the research is actually finding
Anthropic published a labor market study recently that measured real outcomes, not just theoretical risk. The finding: there has been no mass unemployment wave for AI-exposed workers since late 2022. The white-collar collapse people predicted hasn't happened.
But there's a quieter signal buried in the data. Hiring of workers aged 22 to 25 into AI-exposed roles dropped around 14% after ChatGPT launched. AI isn't wiping out mid-career professionals — it's shrinking the bottom of the career ladder. The entry-level roles that used to be how you learned an industry? A lot of those are quietly going away.
The World Economic Forum projects a net gain of 78 million jobs globally by 2030. PwC found a 56% wage premium for workers with real AI skills. The disruption isn't a cliff. But it's not nothing either — and where you land depends a lot on what you're building right now.
There's a real difference between using AI and being transformed by it
I think about it like this. Asking AI to rewrite an email is like using a calculator to check addition. Useful. Fine. But that's not what's creating the 56% wage premium.
The people pulling ahead are redesigning how they work from the ground up. One person doing what used to take a team because they've built AI into their entire workflow. Shifting from doing the task to directing and judging the output. Using AI to do things that flat out weren't possible before — not just faster versions of the same things.
That's the difference between a tourist and someone who's actually moved in. Most people are still tourists.
I built a quick assessment to help you figure out exactly where you fall — Tourist, Practitioner, or Transformational — with specific next steps for your level. It takes about 3 minutes: Take the AI Fluency Assessment →
The concept I keep seeing validated in research is the "Digital Centaur" — human and AI working as one unit. Harvard and the OECD both land on the same conclusion: the highest performers aren't the ones using AI the most. They're the ones who know when to lean on it, when to push back on it, and when to override it entirely. The most in-demand skills in 2026 are still human: critical thinking, judgment, creativity. AI amplifies those. It doesn't replace them.
Where to actually start
Anthropic put together a practical framework — Delegate, Describe, Discern, Diligence — that I think is a solid foundation for anyone not coming from a technical background:
Start by figuring out what's worth handing off entirely (delegate). Then learn to communicate with precision, because vague prompts produce vague thinking (describe). Build the habit of evaluating outputs critically — AI gets things wrong confidently and often (discern). And understand the guardrails: what data shouldn't go in, what decisions shouldn't be delegated (diligence).
But treat that as a floor, not a ceiling. The real goal is using AI to do things you couldn't do before. Build workflows. Combine tools. Ask questions at a scale that wasn't previously possible. That's when it stops being a productivity tweak and starts being an actual shift in what you're capable of.
So here's the real question: are you the brake on the wheel — or is your organization?
Because one of the two is slowing you down. And in my experience, it's usually both.
Not sure which one? Take the assessment and find out where you actually stand: AI Fluency Assessment