Key Takeaways
- Credentials are permission to practice, not competence itself.
- Psychologists perform opening night daily but rarely practice deliberately.
- AI makes deliberate practice of professional judgment possible at scale for the first time.
- Skills must be named, demonstrated, and refined continuously.
- Adapt now or become link-senders in your own profession.
The Vanishing: When Rockstars Become Link-Senders
I need to tell you about people who no longer exist.
Not literally. They’re still breathing, still showing up to work. But their profession died while they were looking the other way, and I watched it happen.
In the 1980s and 90s in South Africa, psychometrists were rockstars. These were the people who sat across from you during assessments, who held the instruments of psychological measurement in their hands like sacred artifacts. They administered tests with precision. They understood the nuances of interpretation. They were integral to the diagnostic process. Parents requested them by name. Organizations built assessment centers around their expertise.

Then digitization arrived.
Not overnight. Not with fanfare. Just gradually, quietly, the way erosion works. Online assessments replaced paper protocols. Automated scoring systems handled calculations. Digital platforms managed scheduling and delivery. And these rockstars, these people who had built entire careers on their assessment expertise, found themselves doing something they never imagined: sending links via email. Tracking completion rates. Troubleshooting login issues.
They became glorified admin assistants in their own profession.
That was fifteen years ago. I remember thinking it was tragic. I remember feeling sympathy for them, the way you feel sympathy for any profession caught in technological disruption. I didn’t think it would happen to me.
I was wrong.
Because I’m standing here today to tell you that I am watching the same thing happen to psychology. Not fifteen years from now. Not in some distant future we can plan for. Right now. While we’re attending conferences and publishing papers and maintaining our credentials.
Our profession is flashing before our eyes, and most of us are too busy being psychologists to notice.
Let me be specific about what I mean. I’m an organizational psychologist. My training taught me to assess workplace dynamics, design interventions, facilitate organizational change. I learned frameworks. I earned credentials. I accumulated letters after my name that signal competence to the world.
And then one morning, I opened an AI tool and asked it to design an organizational intervention for a team conflict I was working on. It gave me a structured approach in forty-five seconds. Multiple options. Evidence-based rationale. Implementation timeline. Risk mitigation strategies.
It wasn’t perfect. But it was 70% there. Maybe 75%.
That’s when the floor dropped out.
Not because AI could do my job. But because I realized what was coming next. If an AI could get 75% of the way there on organizational intervention design, how long until it reached 85%? 90%? And at what percentage does my role transform from “organizational psychologist who designs interventions” to “person who sends AI-generated intervention plans and ensures people follow them”?
How long until I’m sending links?
This isn’t hypothetical anxiety. This is pattern recognition. I watched it happen to psychometrists. I’m watching it happen to customer service professionals, to junior analysts, to entry-level knowledge workers across industries. And the brutal truth is this: credentials don’t protect you when the tasks you were trained to perform become automatable.
Your PhD doesn’t matter if the AI can formulate cases.
Your certification doesn’t matter if the AI can design interventions.
Your years of experience don’t matter if you can’t articulate what you do that the machine cannot.
We need to talk about what does matter. Because there is an answer. It’s just not the one we were trained to expect.
The Credential Trap: What We Confused for Competence
Here’s what I started to notice as I dug into this question. There’s a fascinating assumption baked into how we think about professional development in psychology. We assume that competence is demonstrated through credentials and maintained through continued education units. We complete our training. We earn our licenses. We attend workshops. We accumulate CEUs. We assume this proves we’re competent.
But does it?
Think about what athletes do. They train. Not occasionally. Not annually at a conference. Daily. They break down complex performances into component skills. They practice those skills in isolation. They get immediate feedback. They adjust. They practice again. Musicians do the same thing. Performers rehearse. They run drills. They work with coaches who watch them execute and tell them precisely where they’re failing and how to improve.
Now think about what psychologists do.
We don’t train. We perform. Every single day of our careers is opening night. We walk into consulting rooms, into organizations, into intervention settings, and we execute live, in front of clients and stakeholders whose wellbeing depends on our competence. We might reflect afterward. We might discuss cases in supervision. But we rarely, if ever, practice our craft in low-stakes environments designed specifically to build skill.
A violinist would never approach their career this way. Imagine if musicians only played in front of paying audiences and never practiced scales. We’d call that malpractice. We’d call that irresponsible.
But this is exactly how knowledge workers operate. This is exactly how psychologists operate. We turned professional development into credential accumulation instead of skill refinement.

And now AI is revealing the cost of that choice.
Because here’s what’s becoming painfully clear: if you can’t name the specific skills that make you competent, if you can’t demonstrate those skills in observable artifacts, if you can’t practice those skills deliberately to get better, then you can’t compete with systems that can name tasks, demonstrate outputs, and improve iteratively.
Credentials are not skills. Credentials are permission to practice skills. And we’ve spent decades confusing the two.

The Observable Truth: Naming Skills We Can’t Name
Let me break this down. What are the actual skills that define psychological competence? Not the roles. Not the titles. The skills.

- Case formulation. The ability to take messy human complexity and create coherent diagnostic understanding. That’s a skill.
- Intervention design. The ability to match therapeutic approach to client need and context. That’s a skill.
- Therapeutic presence. The ability to create safety and rapport that enables change. That’s a skill.
- Diagnostic reasoning. The ability to navigate uncertainty and ambiguity to arrive at accurate clinical judgment. That’s a skill.
These are not abstractions. These are observable patterns of professional behavior that show up in the artifacts we create. Case conceptualizations. Treatment plans. Progress notes. Assessment reports. Supervision feedback. These documents reveal our thinking. They show whether we can formulate clearly, reason diagnostically, design appropriately.
And here’s what makes this urgent: AI can now read these artifacts, apply rubrics to them, and give you feedback on your competence. Not your credentials. Your competence.
The technology that disrupted psychometrists isn’t just automating tasks. It’s creating a practice infrastructure that never existed before.
The Practice Infrastructure: What AI Makes Possible
Let me show you what I mean. There’s a model emerging from the tech world with direct implications for psychology. It works like this.

First, you identify a skill that matters for your work. Let’s say diagnostic reasoning in depression cases.
Second, you define what good looks like. You sit down with colleagues whose judgment you trust and you ask them: when you say a depression case formulation is good, what specifically do you mean? Is it clarity of presentation? Integration of multiple data sources? Consideration of differential diagnoses? Explicit treatment rationale? You create a rubric. You make implicit expertise explicit.
Third, you gather examples. You pull three to five real case formulations. You annotate them. You mark what’s strong and what’s weak according to your rubric. You create a reference set.
Fourth, you give that rubric and those examples to an AI. You say: when I send you a new case formulation, score it according to this rubric. Quote the parts you’re reacting to. Explain why you gave each score. Suggest specific edits that would improve this dimension.
And then you practice. You take a new case. You write a formulation. You run it through your AI rubric. You compare your version to stronger versions. You notice what you missed. You do it again next week. And again the week after.
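To make these steps concrete, here is a minimal sketch in Python of what the rubric-and-prompt setup might look like. Everything in it is an assumption for illustration: the dimension names, anchors, and the `build_feedback_prompt` helper are placeholders rather than a validated instrument or any particular product, and the output is simply a prompt you would paste into, or send programmatically to, whatever AI tool you already use.

```python
from dataclasses import dataclass


@dataclass
class RubricDimension:
    name: str
    description: str          # what "good" looks like on this dimension
    anchors: dict[int, str]   # score -> behavioural anchor


# An illustrative rubric for depression case formulation. The dimensions and
# anchors are placeholders showing the shape of the artifact, not a validated
# clinical instrument.
RUBRIC = [
    RubricDimension(
        name="Clarity of presentation",
        description="The presenting problem and key history are stated so a "
                    "colleague could follow the reasoning without the file.",
        anchors={1: "Disorganised or contradictory",
                 3: "Mostly clear, with gaps",
                 5: "Clear, coherent, easy to follow"},
    ),
    RubricDimension(
        name="Differential diagnosis",
        description="Plausible alternatives are named and ruled in or out "
                    "with explicit reasoning.",
        anchors={1: "No alternatives considered",
                 3: "Alternatives named but not weighed",
                 5: "Alternatives weighed against the evidence"},
    ),
    RubricDimension(
        name="Treatment rationale",
        description="The recommended approach follows from the formulation, "
                    "and the link is stated explicitly.",
        anchors={1: "Recommendation unconnected to the formulation",
                 3: "Link implied but not stated",
                 5: "Explicit, evidence-based rationale"},
    ),
]


def build_feedback_prompt(rubric, reference_examples, new_formulation):
    """Assemble the feedback instructions described above: score each
    dimension, quote the passage being reacted to, explain the score, and
    suggest a specific edit."""
    lines = [
        "You are giving structured feedback on a case formulation.",
        "Score it 1-5 on each rubric dimension. For every score: quote the",
        "passage you are reacting to, explain the score, and suggest one",
        "specific edit that would improve that dimension.",
        "",
        "RUBRIC:",
    ]
    for dim in rubric:
        lines.append(f"- {dim.name}: {dim.description}")
        for score, anchor in sorted(dim.anchors.items()):
            lines.append(f"    {score} = {anchor}")
    lines += ["", "ANNOTATED REFERENCE EXAMPLES:", *reference_examples,
              "", "NEW FORMULATION TO SCORE:", new_formulation]
    return "\n".join(lines)


if __name__ == "__main__":
    prompt = build_feedback_prompt(
        RUBRIC,
        ["(annotated strong and weak examples go here)"],
        "(the formulation you want feedback on goes here)",
    )
    print(prompt)  # paste into whichever AI tool you use, or send via its API
```

The point is not the code. The point is that the rubric becomes an explicit, inspectable artifact: something you can argue about with colleagues, refine, and reuse every week.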
This is what deliberate practice looks like for knowledge work. This is what it means to train your craft instead of just performing it.
And psychology has never had this. We’ve never had a scalable way to practice clinical judgment, get structured feedback, and track improvement over time. We relied on supervision, which is valuable but limited by supervisor availability and inherently subjective. We relied on peer consultation, which is helpful but inconsistent. We relied on continuing education, which exposes us to ideas but doesn’t build skills.
AI changes this. Not because AI can replace psychological judgment. But because AI can serve as a practice partner that helps you deliberately refine the judgment you already have.
Let me be very clear about what I’m not saying. I’m not saying AI should make clinical decisions. I’m not saying we should trust AI’s judgment about human wellbeing. I’m not saying technology replaces the human elements that make psychology valuable.
What I’m saying is that we now have the ability to practice our craft the way athletes practice theirs. To break complex psychological competencies into component skills. To create feedback loops. To track growth. To move from credentialed permission to demonstrated competence.
And if we don’t seize this opportunity, someone else will. Tech companies are already building AI systems that formulate cases, design interventions, track outcomes. They’re not better than trained psychologists. Not yet. But they’re improving rapidly, and they have something we often don’t: clear rubrics for what good looks like, structured feedback mechanisms, and iterative refinement processes.
They’re treating psychological work as a set of skills to be practiced. And we’re still treating it as a credential to be maintained.
Who do you think wins that race?
Deliberate Practice in Action: What Training Could Look Like
Here’s what practicing psychological skills with AI could look like.
Imagine you’re training therapists in cognitive case formulation. Instead of just teaching Beck’s model in a workshop and hoping they apply it well, you create a practice loop. You define your rubric: clear identification of core beliefs, explicit links between thoughts and emotions, coherent developmental narrative, specific behavioral patterns. You gather examples of strong and weak formulations. You train an AI system on your rubric.
Now your trainees practice. They take case vignettes. They write formulations. They get immediate, structured feedback on what they did well and where they need to improve. They revise. They try again. They track their scores across multiple cases. They identify their blind spots. Do they consistently miss developmental factors? Do they struggle with linking cognition to behavior? The data tells them. The AI shows them. They practice deliberately on their weak spots.
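Here is a small sketch of what that tracking step could look like, using made-up scores and the Beck-style rubric dimensions named above. The data and labels are purely illustrative; the point is that once feedback is captured as a number per dimension per case, blind spots fall out of a few lines of arithmetic.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rubric scores (1-5) from one trainee's practice cases, keyed by
# rubric dimension. The numbers are invented purely to show the mechanics.
practice_history = [
    {"core beliefs": 4, "thought-emotion links": 3,
     "developmental narrative": 2, "behavioural patterns": 4},
    {"core beliefs": 4, "thought-emotion links": 4,
     "developmental narrative": 2, "behavioural patterns": 3},
    {"core beliefs": 5, "thought-emotion links": 4,
     "developmental narrative": 3, "behavioural patterns": 4},
]


def weakest_dimensions(history, worst_n=2):
    """Average each rubric dimension across all practice cases and return the
    lowest-scoring ones -- the likely blind spots to target next."""
    by_dimension = defaultdict(list)
    for case_scores in history:
        for dimension, score in case_scores.items():
            by_dimension[dimension].append(score)
    averages = [(dim, mean(scores)) for dim, scores in by_dimension.items()]
    return sorted(averages, key=lambda item: item[1])[:worst_n]


for dimension, avg in weakest_dimensions(practice_history):
    print(f"Focus the next practice block on: {dimension} (average {avg:.1f}/5)")
```

Even a log this simple turns “I think I’m getting better” into a per-dimension trend you can act on.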
This isn’t replacing supervision. This is creating the repetitions that make supervision more valuable. By the time they sit with a human supervisor, they’ve practiced the mechanics enough that supervision can focus on nuance, on context, on the things that require human wisdom.
Or imagine you’re developing competency in suicide risk assessment. This is high-stakes work. Making errors costs lives. And yet how do psychologists learn this? We read guidelines. We attend trainings. Maybe we observe experienced clinicians. Then we’re expected to execute accurately in real clinical situations where ambiguity is high and consequences are severe.
What if instead we practiced? What if we used AI to simulate assessment scenarios, practice our decision-making, get immediate feedback on whether we gathered the right information, weighted risk factors appropriately, developed adequate safety plans? What if we could make mistakes in simulation and learn from them before we’re sitting across from an actual suicidal client?
This isn’t science fiction. This is possible now. The technology exists. What’s missing is the professional infrastructure. What’s missing is our willingness to acknowledge that psychological skills are trainable competencies, not mystical gifts that emerge from credential accumulation.
And here’s why this matters urgently. As AI systems get better at performing psychological tasks, the differentiator won’t be whether you can do the task. It will be whether you can do it better than the machine, and whether you can prove it.
The psychologists who survive this transition won’t be the ones with the most impressive credentials. They’ll be the ones who can demonstrate competence in observable artifacts, who can articulate their judgment process clearly, who can show continuous skill refinement. They’ll be the ones who treated their psychological expertise as a craft to be practiced rather than a credential to be protected.

They’ll be the ones who didn’t wait for the profession to become admin work before they adapted.
The Irreplaceable Human: What Machines Cannot Touch
I know what some of you are thinking. This sounds mechanistic. This sounds like it reduces the art of psychology to rubrics and scores. This sounds like it misses what makes therapeutic work valuable: the human connection, the relationship, the ineffable quality of presence that creates healing.
And you’re right to worry about that. The durable elements of psychological practice, the parts AI genuinely cannot replicate, are precisely those high-trust, high-context, high-ambiguity interactions where human judgment matters most.

- Building therapeutic alliance. That’s not automatable.
- Navigating ethical complexity in real time. That’s not automatable.
- Holding space for another human’s suffering without fixing or fleeing. That’s not automatable.
These are the skills that matter most. And these are precisely the skills we should be practicing most deliberately.
Because here’s the paradox: the more AI can do the mechanical tasks of psychology, the more essential it becomes that we excel at the deeply human ones. But we can’t excel at them if we’ve never practiced them deliberately. We can’t demonstrate competence in therapeutic presence if we can’t articulate what presence looks like, create rubrics for it, get feedback on it, track our growth in it.
Right now, we say things like “you either have it or you don’t” about therapeutic presence. We say “it comes with experience” about clinical intuition. We say “you’ll know it when you see it” about diagnostic wisdom.
These are admissions of defeat. These are ways of saying we don’t actually know how to teach the most important parts of our craft. And if we don’t know how to teach them, how can we claim to practice them deliberately? How can we claim to get better at them over time?
The practice revolution I’m describing isn’t about reducing psychology to algorithms. It’s about taking the craft of psychology seriously enough to train it like a craft. To name our skills explicitly. To create artifacts that reveal our competence. To build feedback systems that help us improve. To treat professional development as skill refinement rather than credential maintenance.
And to do this before our profession makes that choice for us.
The Cost of Comfort: What Psychometrists Teach Us
Let me bring this back to where we started. The psychometrists I knew didn’t fail because they lacked credentials. They had extensive training. They held professional certifications. They’d accumulated years of experience.

They failed because their role was defined by tasks rather than skills, and when technology automated the tasks, there was nothing left. They couldn’t articulate what they did that was irreplaceable. They couldn’t demonstrate competencies that machines couldn’t match. They couldn’t adapt because adaptation required acknowledging that credentials weren’t enough.
And by the time they realized this, the transformation was complete.
I don’t want this to be psychology’s story. I don’t want us to look back in ten years and realize we spent this crucial window protecting our credentials instead of developing our skills. I don’t want us to become the profession that got disrupted because we were too invested in who we were to become who we needed to be.
But that requires us to make uncomfortable admissions. We need to admit that we don’t actually practice our craft deliberately. We need to admit that we’ve confused credential accumulation with competency development. We need to admit that we can’t always articulate what makes us competent, which means we can’t teach it systematically or improve it deliberately.
These admissions are costly. They challenge our professional identity. They force us to acknowledge gaps in how we’ve approached development. They require us to learn new skills, adopt new practices, embrace new technologies we might find threatening.
But the cost of not making these admissions is higher. The cost is irrelevance. The cost is watching our professional roles hollow out while we insist that our credentials prove our value. The cost is becoming link-senders in our own discipline.
Here’s what I’m asking you to consider. What if you treated your psychological practice as a set of skills to be deliberately refined rather than credentials to be maintained? What if you created artifacts of your competence that revealed your thinking? What if you built rubrics for what good looks like in your areas of expertise? What if you used AI not to replace your judgment but to practice it more systematically?
What if you started training like an athlete instead of performing like a credential-holder?
This isn’t a comfortable shift. It requires acknowledging that experience alone doesn’t guarantee competence. It requires making implicit expertise explicit, which is hard work. It requires embracing technology as a practice partner, which feels threatening when we’re worried about replacement.
But discomfort isn’t the same as danger. Sometimes discomfort is the signal that we’re growing in necessary directions.
The psychometrists I knew were comfortable right up until their roles evaporated. They were credentialed, experienced professionals who assumed their expertise was self-evident and therefore protected. They were wrong.
We have the chance to be right. We have the chance to transform professional psychology from a credential-based guild to a competency-based discipline. We have the chance to practice our craft deliberately, demonstrate our skills observably, and prove our value continuously.
We have the chance to adapt before adaptation is forced upon us.
The Question That Defines You: Skills or Credentials?
So here’s my question for you. Not what credentials do you hold. Not how many years you’ve practiced. Not what letters come after your name.
What psychological skills do you possess that you could name explicitly, practice deliberately, demonstrate observably, and refine continuously?
Can you articulate them? Can you create artifacts that reveal them? Can you build rubrics that define them? Can you track your growth in them over time?
If you can’t answer these questions, you’re not alone. Most of us can’t. We were never trained to think this way. Our professional development systems weren’t built for this kind of transparency.
But that’s changing. Technology is forcing the change. And we can either resist it until our roles transform without us, or we can lead it by reimagining what professional competence actually means.
I’m choosing the second path. I’m learning to name my skills, practice them deliberately, and demonstrate them observably. I’m building rubrics for organizational psychological competence. I’m using AI as a practice partner. I’m treating my expertise as a craft to be refined rather than a credential to be protected.
It’s uncomfortable. It’s exposing. It reveals gaps I’d rather not acknowledge.
But it’s also liberating. Because for the first time in my career, I’m not just performing. I’m practicing. I’m not just accumulating credentials. I’m refining competencies. I’m not just hoping I’m good at what I do. I’m tracking whether I’m getting better.
And I’m not waiting for my profession to become admin work before I adapt.
I invite you to join me. Not because I have all the answers. Not because this path is easy. But because the alternative, the path of credential protection and professional comfort, leads to a future I’ve already witnessed.

A future where rockstars become link-senders while insisting they’re still practicing their craft.
We can do better. We have to do better.
The only question is whether we’ll choose to do it now, while we still have agency, or later, when the choice has been made for us.
I know which future I’m building toward. What about you?




