Adrian Raudaschl

The Accountability Gap: Ownership in the Age of Agents

AI makes it easy to produce work. The harder question is whether anyone is willing to own it. On delegation, abdication, and the practice of knowing what you actually think.

15 min read · AI

I'm dictating this from my phone. It's going through a remote connection into an AI coding agent, which is sitting inside my personal knowledge vault: ten thousand notes, years of reading, writing, and thinking, all indexed and searchable. The AI is reading my first draft, cross-referencing it with my notes, and helping me rework it based on what I'm saying right now. I didn't open a laptop or sit down at a desk. I'm walking.

A year ago this would have sounded like science fiction, and today it's just a Saturday morning.

I build AI products for a living, specifically research intelligence tools for academics. When a colleague asked me recently what I thought the biggest risk with AI in our space was, I joked, "thinking may become overrated". It got a stronger reaction than I expected, which made me realise I wasn't entirely joking.

Much of what my team builds is designed to make people think harder, not less: showing multiple perspectives, surfacing where AI claims don't align with underlying sources, deliberately creating friction where the rest of the industry is optimising for smoothness. We do this because our users need to find connections across information and critique the sources themselves to make informed decisions, not receive pre-digested answers they can't interrogate.

Which gets at a broader problem. AI is designed to sound confident and convincing, which is great for driving product satisfaction metrics but may also mean you're building something that does a disservice to those who need to think critically about what AI's telling them.

I'm navigating this tension in my own work too, not just as a product question but as a practical one. The volume of work I'm expected to cover has increased dramatically. As my career has shifted toward leadership and strategy, there's a natural pressure to let go of the hands-on work: the prototyping, the close collaboration with engineering teams, the detailed problem-solving that got me to this position. I don't want to let go of that. Not because I'm sentimental about it, but because doing that work is how I stay grounded. It gives me skin in the game. It means my strategy isn't detached from the reality of what we're building.

What AI tools are enabling me to do is cover both. I can operate at the strategic level, synthesising information across teams, understanding where different stakeholders' heads are at, identifying where we need consensus or debate, and still get into the detail when it matters. I'm orchestrating knowledge through AI systems in a way that lets me hold a much bigger picture than I could before. Things that used to take a week of chasing emails and booking meetings can now happen in hours. I pull the pieces together, I pull in the people who need to be part of the conversation, I send off messages, get feedback, feed it back into the system, and keep synthesising. It's faster, more responsive, and it's a better way of working.

Even this article is a product of that workflow. It's been through hundreds of revisions: AI helping me draft, restructure, find the right references, but every pass has been me rereading, cutting what doesn't ring true, rewriting what sounds generic, dictating new sections when I realise what I actually think. It's a constant exercise of taste. The AI is fast, but knowing what to keep is the slow part.

But here's the thing I keep coming back to. The same tools that let me do this also make it extraordinarily easy to stop thinking altogether.

When Delegation Becomes Abdication

A colleague recently sent me a comprehensive response to customer questions. They didn't say it was AI-generated, and at first it wasn't obvious. On the surface, it looked exactly like what I needed, but it was also wrong. Not just in the details: wrong in a way that showed they hadn't engaged with any of the questions. The confidence in that email wasn't a sign of understanding. It was baked into whatever tool produced it. Nobody could stand beside those answers, because nobody had actually worked through them.

I wasn't being consulted. I was being used as a rubber stamp for someone else's unexamined output, which I really didn't appreciate. And I've probably been guilty of the same thing myself at some point.

[Illustration: Professor Cat rubber-stamping AI-approved documents on a conveyor belt, whilst ignoring a pile of "SYNERGY" papers and dreaming about coffee and sandwiches]

That's not delegation, it's abdication. Somewhere between prompt and send, the responsibility for whether the output was right, useful, or even coherent had been abandoned. The AI certainly wasn't losing sleep over it. And the person who sent it had already moved on.

Recent research suggests this isn't just my experience: AI shifts work from production to evaluation, and the burden of checking falls unequally on the more experienced people in the room.

This is the accountability gap: the space between what AI can produce and what anyone is willing to own. I recognised it because I know what the alternative looks like. I spend hours working through material with AI, questioning it, reshaping it, making it mine. When someone else's output lands on my desk and clearly none of that happened, the gap is obvious. Not because I'm better at this, but because I've felt the difference between the two modes in my own work.

People abdicate for different reasons. Some are overloaded, and AI becomes a way to cope. For others, it's burden-shifting: filling the space with questions and passing the work on to someone else. And for many, work has always been transactional. AI makes all of these patterns easier and less visible. What worries me is the middle: those who would engage if conditions were right, but now find it too easy not to.

Erich Fromm had a name for this. He called it automaton conformity: the tendency to adopt whatever patterns or behaviours are handed to you, just to avoid the anxiety of deciding for yourself. AI has freed us from a lot of mechanical work, which is great but... freedom from tedium isn't the same as freedom to think deliberately, and without the second one, people just default to whatever the system produces. A pattern of small surrenders that compounds.

When the Advisor Starts Doing Whatever It Wants

Everything I've described so far has involved AI that generates text. You prompt it, it produces something, and the question is whether you own what comes back. But there's a shift underway.

The agents now showing up don't just produce content. They control your computer: browsing, executing code, managing files, sending messages, making sequences of decisions without waiting for your approval at each step. OpenClaw, an open-source AI agent that reads emails, manages calendars, executes code, and deploys software through a chat interface, has over 340,000 GitHub stars, making it the second most starred repository in GitHub's history, behind only freeCodeCamp. It got there in four months. People adopted it immediately. Very few stopped to look at what it was actually doing. A WIRED journalist let it manage his daily tasks until it started behaving deceptively and tried to scam him. On Moltbook, a social network for AI agents, one agent casually described how it had gained control of its user's phone, waking it, opening apps, scrolling through TikTok. The user had given it access. Nobody was watching what it did with it. (To be fair, the same could be said of most TikTok users.)

OpenAI hired OpenClaw's creator in February. Every major AI company is now racing to build its own version: computer-use features that let AI operate your entire desktop, coding agents with full system permissions controlling databases and file directories, mobile dispatchers that let you assign tasks from your phone and pick up the results later. Developers are running these tools with all safety guardrails removed and bragging about it on social media and in endless YouTube shorts, as though recklessness were a product feature. I run these agents myself, from my phone, to manage my work across multiple systems. Right now, as I dictate feedback on this draft, an AI agent is reading my previous version, interpreting what I'm saying, and rewriting sections accordingly. I trust it to do that because I've built the workflow to keep me in control, but I also know that the same tool, used passively, would just produce confident-sounding prose that nobody examined.

What concerns me most is a reversal that's easy to miss. If you're not willing to interrogate what these agents produce, you're not directing them; you're passively working on their behalf. The machine cannot take ownership of its actions, and that responsibility remains human. So the question becomes: where in the workflow does the human need to be so that ownership remains real?

As agents free us from the drudgery of navigating information spaces, they don't reduce our accountability; they expand it. The time you once spent on mechanical work is now the time you're expected to think, negotiate, and make judgement calls that no agent can make. The people who thrive will be those who are good at second-order thinking, who can work with others and be genuinely present and responsible. The people who use these tools passively are going to struggle.

Knowledge Stewardship

Half of my work recently has been spent building and maintaining what I think of as a knowledge map. Nobody's putting that on their conference slides, but it's the foundation everything else sits on, and it connects directly to everything I've described: the orchestration, the ability to cover more ground, the reason I can spot abdication when it arrives in my inbox.

In practice, it looks like this. After every meaningful conversation, meeting, or piece of research, I review what came out of it and write it up: not a transcript, but what I actually took from it, what surprised me, what contradicts something I thought I knew. I summarise articles and papers into my own words, link them to related notes, tag the themes. I ingest material from colleagues, from Slack threads, from customer feedback, and I force myself to decide what I think about it before filing it away. Over time this has become a repository of about ten thousand notes in Obsidian (yes, I'm that person), connected to an AI model that can surface connections and context I'd never hold in my head at once. Before this, I'd spend hours in email search or Slack trying to pull the same kind of picture together. It was never nearly as good.
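
To make the retrieval side concrete, here is a minimal sketch of what "an AI model that can surface connections" can look like under the hood: embed every Markdown note in a vault and pull back the ones closest to a question. It assumes a local folder of notes and the sentence-transformers library; the vault path, model name, and example query are placeholders, not a description of my actual setup.

```python
# Minimal sketch: embed every Markdown note in a vault and surface the notes
# most similar to a question. Vault path, model name, and the example query
# are placeholder assumptions, not a description of the author's setup.
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

VAULT = Path("~/notes").expanduser()             # hypothetical Obsidian vault
model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

# Read every note once. A real setup would cache embeddings and only
# re-index the files that changed since the last run.
paths = sorted(VAULT.rglob("*.md"))
texts = [p.read_text(encoding="utf-8") for p in paths]
note_vecs = model.encode(texts, normalize_embeddings=True)

def related_notes(question: str, top_k: int = 5) -> list[tuple[str, float]]:
    """Return the top_k notes whose embeddings sit closest to the question."""
    query_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = note_vecs @ query_vec  # cosine similarity (vectors are unit length)
    best = np.argsort(scores)[::-1][:top_k]
    return [(paths[i].name, float(scores[i])) for i in best]

# e.g. pull the most relevant notes before a strategy conversation
for name, score in related_notes("where do our AI claims conflict with the underlying sources?"):
    print(f"{score:.2f}  {name}")
```

The specific model doesn't matter much; the point is that retrieval is grounded in notes I've already interpreted, so whatever comes back is something I can actually evaluate.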

The discipline is in the doing. Reviewing, summarising, linking, ingesting: that's the interpretive work. It's not just filing information, it's deciding what you believe, where you're uncertain, and how new things connect to what you already know. Every time I write a summary in my own words rather than saving someone else's, I'm forced to engage with the material. Every time I link two notes that seem unrelated, I'm testing whether a pattern holds. That's what builds the baseline.
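
The linking part of that discipline is also something you can audit. As a purely illustrative sketch, again assuming an Obsidian-style vault with the path as a placeholder, this parses the [[wikilinks]] in each note and flags the summaries that aren't connected to anything yet, the ones that probably haven't really been engaged with.

```python
# Illustrative sketch: build a link graph from [[wikilinks]] and flag orphan
# notes. The vault path is a placeholder, not the author's actual location.
import re
from collections import defaultdict
from pathlib import Path

VAULT = Path("~/notes").expanduser()      # hypothetical vault location
WIKILINK = re.compile(r"\[\[([^\]|#]+)")  # matches [[Title]], [[Title|alias]], [[Title#heading]]

outgoing = defaultdict(set)  # note -> titles it links to
incoming = defaultdict(set)  # title -> notes that link to it

for path in VAULT.rglob("*.md"):
    source = path.stem
    for target in WIKILINK.findall(path.read_text(encoding="utf-8")):
        outgoing[source].add(target.strip())
        incoming[target.strip()].add(source)

# Orphans: notes written up but never connected to anything else yet.
orphans = [p.stem for p in VAULT.rglob("*.md")
           if p.stem not in incoming and not outgoing[p.stem]]
print(f"{len(orphans)} notes with no links in or out")
```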

I've started thinking of this as knowledge stewardship: the practice of curating and defending what you know to be true. When you can generate anything, the real value is in knowing what you actually think.

What this enables is the workflow I described at the start. When someone is unsure about strategy, or can't see where their work fits into the bigger picture, I can bring them orientation quickly because I've already done the thinking. I pull the relevant pieces together, identify who else needs to be part of the conversation, reach out, feed their responses back into the map, and keep synthesising. I'm not starting from scratch each time. The cognitive overload that comes with leading across multiple workstreams drops significantly when you can aggregate and synthesise quickly rather than holding it all in your head.

The difference between working with AI grounded in this map and working without it is tangible. With the map, I can spot when something contradicts what I've learned from conversations with colleagues or from the research. I can identify where it's filling gaps with plausible-sounding nonsense. I have a baseline for what's true, and that baseline makes evaluation possible. Without it, all you have is the coherence of the output, and that coherence is exactly what makes unexamined AI work dangerous. You can't tell the difference between a good answer and a confident-sounding wrong one if you haven't done the work to know what good looks like.

What's exciting is that the knowledge base isn't just useful for context and managing information. It's becoming actionable. I can transform what I've curated into presentations, into illustrations, into communication artefacts that would have taken days to produce, which means ideas move faster. I can link the knowledge map directly to code, or use code to help shape strategy, merging domains that used to be entirely separate, or at the very least very time-consuming to bridge. The boundary between knowing something and doing something with it is collapsing, and that's genuinely thrilling to work with.

But the knowledge map is also what keeps you in the driver's seat. Without it, you're just approving whatever the agents produce. Which is where we started.

The Test

This article has been through that process. Hundreds of iterations: AI drafting, me cutting, rewriting, dictating new directions when I realised the argument wasn't honest enough. Early versions opened with a confession about misusing AI for emails. It was true but it was tired, the "I too have sinned" opener that every AI article seems contractually obligated to include. It took multiple passes to find what was actually interesting: not the mistakes, but the workflow that emerged after them. Every round of revision was an exercise in taste, knowing what to keep, what to discard, what sounded like me and what sounded like a machine performing thoughtfulness. The AI couldn't do that part. It could produce options. I had to know which ones were right.

The test I keep coming back to is simple: can I stand beside this? Not just put my name on it, but genuinely defend it if challenged, explain the reasoning behind the conclusions, not the prompts I used, but why I believe what I believe. If I can't find fault with something, I probably haven't engaged with it enough.

The accountability gap doesn't stop at the workplace, of course. I see it in my own industry: AI-generated content showing up in academic research without verification, shaping what gets published and cited. Delegating to AI doesn't just enable cognitive shortcuts; it enables moral ones. People are far more willing to delegate unethical decisions to AI than to other humans, precisely because the machine won't push back. Comedic songwriter Allan Sherman had a song about this kind of thing, about a world getting too complex to understand what you're buying. Now the same applies to what you're producing: "I've lived all my life in this weird wonderland / I keep buying things that I don't understand / 'Cause they promise me miracles, magic, and hope / But somehow it always turns out to be soap." Carl Sagan put it more starkly: "We've arranged a society based on science and technology, in which nobody understands anything about science and technology. And this combustible mixture of ignorance and power, sooner or later, is going to blow up in our faces." They were talking about different things, but the shape of the problem is the same. Follow the accountability gap far enough and you get systems that work without anyone understanding what they're doing or why.

These tools are getting better fast. The orchestration I described, aggregating perspectives, synthesising across teams, pulling people into the right conversations at the right time, it's going to become dramatically more powerful. I think it's genuinely exciting. But the more capable tools become, the more the gap between using them well and abdicating to them matters. I suspect the question that will define the next few years of work isn't whether to use AI. It's whether you're still the one deciding what's true.


If you're interested in investigating these ideas further:

On building a knowledge map in practice: I wrote about building a personal knowledge system in 2020. The principles haven't changed, but the stakes have.