Don't Just Adapt. Define The Model.
A perspective from Yannick Bakker

I lost the lead with AI, and I teach it for a living

What happens when a legal professional's core principle meets the reality of agentic AI tools

Yannick Bakker · February 2026 · 7 min read

Key insight

We tell legal professionals to always stay in the lead when working with AI. Then I tried Claude Code and realised I was blindly installing programmes because a chatbot told me to.

The moment it stopped feeling comfortable

Here is something I tell every legal professional I train: you are the architect, AI is the tool. Stay in the lead. Check the output. Understand what is happening before you act on it. It is one of our core principles at The Legal Model, and I believe it completely.

So let me tell you about the evening I violated it.

I had been asked to design a legal intake gateway for a client's legal department. The kind of tool where business colleagues submit their requests through one entry point instead of the current chaos of emails, WhatsApp messages, Teams pings, and phone calls. I thought: why not use Claude Code (a command-line AI tool that writes and runs code) to build a working prototype I could show the legal team and their IT colleagues?

To start, I had to open a command console (a text-only interface for giving your computer direct instructions). If you have never seen one, it looks like 1984. Black screen, blinking cursor, no buttons. I typed "claude" and pressed enter. And from that moment, I was following instructions I did not fully understand.

Install this. Run that command. Type "git add." Connect to GitHub. Set up Supabase. Link to Vercel. Within an hour, I had installed four programmes, created accounts on three platforms, and connected systems I could not explain to you if you asked me at dinner. Any one of these could have been malicious, and I would not have known.
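For readers who have never seen commands like these, here is an annotated sketch of what that evening's instructions amounted to. The tool names match the story; the commands themselves are representative examples of what such a session involves, not a script to paste into a terminal — which is rather the point.

```shell
# Annotated sketch of the kind of commands an AI coding tool dictates.
# Deliberately commented out: read first, run later.

# git add .              # stage your files: tell git which changes to record
# git commit -m "draft"  # save a snapshot of those changes on your machine
# git push               # upload the snapshot to GitHub, an online code host
# npx supabase init      # connect a database service (representative example)
# npx vercel             # publish the prototype via Vercel, a hosting platform

echo "four programmes, three platforms, one evening"
```

Each of those lines hands a tool real access to your machine or your accounts, which is exactly why they deserve a pause.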

Why this is not just my problem

In February 2025, Andrej Karpathy, one of the co-founders of OpenAI, coined the term vibe coding: building software by describing what you want, not writing code. He described a new way of working where you tell an AI what to build, accept the result, and move on without fully understanding the code it produced.[1] "I just see things, say things, run things, and copy paste things," he wrote. "It mostly works."

Karpathy meant it as an observation, not a recommendation. But the practice has spread fast, well beyond software engineers. Designers, consultants, and now legal professionals are building functional tools with AI code generators, often without grasping the technical layers underneath.

Our experience training legal teams on these tools shows something Karpathy's framing misses: for legal professionals, the discomfort is not a bug. The instinct to pause and say "wait, what just happened?" is exactly the right instinct. The problem is that most people either suppress it because the results look impressive, or they abandon the tool entirely because the discomfort is too much.

Neither response is useful.

The principle is not broken; your picture of it is

Here is what I have learned, both from my own stumbling and from training legal professionals across Europe: "human in the lead" does not mean "human understands every technical detail." It never did.

When you review a contract drafted by a junior associate, you do not rewrite every clause from scratch. You check the structure, test the logic, flag what feels off, and verify the critical points. You exercise judgment without having typed every word yourself. The same principle applies when working with AI coding tools, but the emotional experience is completely different.

When a junior sends you a draft contract, you feel competent. When a command line asks you to type "git push," you feel like a fraud. The loss of control is not technical. It is psychological.

"A principle is not a rule. A rule tells you what to do and shuts down thinking. A principle makes you think harder."

Chris Kwiatkowski, co-founder and head of AI strategy, The Legal Model

My colleague Chris is right. The question was never "do I understand every line of code?" It is: do I know what I am building, why I am building it, and what to check before I trust it?

That reframe changes everything. I did not need to understand what git (version control software that tracks changes in code) does at a technical level. I needed to pause and ask: what is this programme? What does it access on my system? Is there a risk I should know about? The tool to ask those questions was right in front of me. I just did not use it, because the momentum felt too good.

Three things that actually help

After going through this myself and watching dozens of legal professionals hit the same wall, here is what works in practice.

01

Start with play, not with production

I did not build the legal intake gateway first. I built a lineup manager for my Sunday football team. I built a scoring app for a weekend away with friends. By the time I sat down to prototype the legal tool, I understood the workflow, the platforms, and the failure modes, because I had already made every mistake in a context where nothing was at stake.

If you want to explore AI coding tools, pick a personal project first. Something where a failure costs you nothing but an evening.

02

Build the pause into your process

When an AI tool tells you to install something, that is your cue to open a second window and ask: what is this? What does it do? Are there known risks? Yes, it takes extra time for that one step. But it is the difference between being a user and being in the lead.
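That pause can be as small as one line in the terminal. A minimal sketch, assuming a Unix-like shell and using git from the story as the stand-in tool:

```shell
# A small "pause" habit before acting on an AI's install instruction.
# "git" is just the example from this article - substitute whatever
# the AI has just asked you to run.

tool="git"

if command -v "$tool" >/dev/null 2>&1; then
    # The command already exists; at least you know where it lives.
    echo "$tool is already installed at: $(command -v "$tool")"
else
    # Not installed yet: the moment to open that second window and read up.
    echo "$tool would be a new install - check what it is before you say yes"
fi
```

The answer itself matters less than the habit: one deliberate check before each "yes".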

We teach a simple framework in our programmes: think, check, act. Think about what you need. Let AI help you plan or build. Check the output before you commit. Then act. Every step still requires your judgment. AI does not reduce the calories you burn; it changes where you burn them.

03

Separate the architect from the builder

As a legal professional with a UX design background, my value was never in writing code. It is in knowing what the legal department needs, how an intake flow should work, what questions to ask at each stage, and how to make the result usable for non-lawyers.

Claude Code builds a working prototype in twenty minutes. I would have spent days doing the same in Figma (a design tool for creating interface mockups). But the architecture, the decisions about what to build and why, those remain entirely mine. Your IT colleagues will handle security, compliance, and production-readiness. Your job is to show them what "good" looks like for the legal function.

Before: the old prototype workflow

Sketch on paper. Rebuild in Figma frame by frame. Create every button, every link, every screen manually. Share a static mockup. Days of work before anyone clicks anything.

After: the AI-assisted workflow

Describe the architecture and user flow. Let Claude Code build a working prototype. Alter, test, and refine in conversation. Share a clickable tool in twenty minutes. Your judgment shaped every decision; AI handled the construction.

Where this is heading

Six months ago, AI literacy meant understanding how large language models work so you could prompt them well and verify their output. That is still true. But AI tools are no longer just answering questions. They are executing multi-step tasks, installing software, connecting systems, and in some cases proposing their own next actions without being asked.

The principle of being in the lead has not become less important. It has become more important, and harder to practise. That is precisely why it matters.

If you are a legal professional who has felt that same flicker of discomfort, that moment where you thought "I am not sure I am in control here anymore," consider this: that feeling is not a weakness. It is your professional instinct doing exactly what it should.

The question is what you do next. Do you stop and check, or do you just press enter?

Ready to put your team in the lead with AI?

We help legal departments build the skills and frameworks to use AI confidently, not blindly.

Book your free strategy call →

About The Legal Model

Maaike Roet, co-founder and CEO

Maaike leads The Legal Model's strategy and client relationships. She connects legal leadership with practical AI transformation, building programmes that move teams from overwhelmed to operational.

Yannick Bakker, co-founder and legal futurist

Yannick is a commercial legal counsel with 12+ years of experience at organisations including VodafoneZiggo and Vattenfall. He combines legal expertise with UX design thinking to make legal services accessible and human-centred.

Chris Kwiatkowski, co-founder and head of AI strategy

Chris brings a background in software, executive coaching, and AI implementation. He designs AI transformation programmes that bridge the gap between technical capability and organisational readiness.

Maaike, Chris & Yannick
Founders, The Legal Model

Sources

  1. Andrej Karpathy, X/Twitter post on "vibe coding," February 2025