In Part 1, I told you about a sunny afternoon at a Stockholm café where I stopped reading about AI and started building with it. A reading app for my kids, vibe-coded in a few hours. It felt like discovering a superpower.
That afternoon created a hunger. Once you know you can move fast, you want to move faster.
I didn’t just stay on the bike. I built an engine, strapped it on, and floored it.
Building at the Edge
I’ve always believed that you can’t help others succeed with technology you haven’t wrestled with yourself. At work, I spend my days helping partners and customers navigate cloud-native tech and AI. But having real conversations — the kind where you can say “yeah, I hit that exact problem, here’s what I learned” — requires getting your hands dirty. You need to know where the bodies are buried.
So I started building. Not toy projects. Real systems that solved real problems.
First came a personal AI assistant. Not a chatbot — an assistant. One that could actually help me manage my day, not just answer trivia. I learned quickly that building a chatbot that doesn’t suck is incredibly hard. And the difficulty has almost nothing to do with the model. It’s about the experience. The interaction design. The moment when a user thinks “this actually understands what I need” versus “this is just autocomplete with a personality.”
Then I gathered every spare server I had around the house and stood up a real open-source Kubernetes cluster. At work, I help customers adopt cloud-native infrastructure every day, and there’s a severe skill gap between the people who know Kubernetes and the people who don’t. Managing Kubernetes requires an extensive skill set, and not every organization has a large platform operations team. So I wanted to answer a real question: how easy can Kubernetes become if you have agents managing the cluster for you? Can they accelerate deploying containerized applications and handle the day-to-day operations? It turns out agents are remarkably good at wrangling kubectl and writing YAML files.
Then came the agent manager and API gateway. How do you make agents work as a team? How do you implement agent-to-agent communication in a way where they actually collaborate instead of trampling all over each other? Agent orchestration is one of those problems every organization wants to solve, but I’ve yet to see a system that’s truly cracked it. I wanted to understand why, so I built one — not in theory, not from a whiteboard, but for real.
At first I was just intrigued by the challenge. Watching multiple agents choreograph themselves across well-defined tasks while I orchestrated that choreography — there’s something deeply satisfying about that. Maybe I’m a bit of a goofball, but I’ve always liked bringing chaos into order. I’ve always seen myself as, let’s say, “aesthetically challenged” — front-end was never really for me. I like the backend. I like the terminal. I used to build orchestration systems for document processing and information management, tied to specific business processes. This was the same muscle, just with agents instead of APIs.
But the more I worked with it, the more I realized this wasn’t just exploration anymore. I needed these tools — for myself, for my customers, for my partners. So I started pivoting toward a system that could help the manager (that’s me) operate without checking every piece of work. Think of it like a lead developer who doesn’t read every line of code. Instead, you sit down with your developers and ask them questions about what’s been implemented. Based on the answers, you can deduce whether this is going to work or not — because you’ve always had testers writing unit tests, integration tests, and end-to-end tests. Those aren’t going away. They need to be part of your orchestration systems too. Combine the two — structured review and automated testing — and you don’t need to read all the code. Just the segments that matter. As data engineers like to say, “talk to your data.” I say: talk to your code.
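As a rough sketch of what “talk to your code” can look like in practice, here’s a minimal review gate. Everything here is hypothetical — the policy, the function name, and the placeholder test command — but the idea is the one above: let the automated suite run first, and only fall back to reading the diff yourself when it fails.

```python
import subprocess
import sys

def review_gate(test_cmd):
    """Run the project's test suite; decide how deep the human review goes.

    Hypothetical policy: green tests mean you interrogate the agent about
    its changes instead of reading every line; red tests mean you go read
    the failing segments yourself.
    """
    result = subprocess.run(test_cmd, capture_output=True, text=True)
    if result.returncode == 0:
        return "ask-the-agent"   # structured Q&A review is enough
    return "read-the-diff"       # failures: inspect the relevant code directly

# A trivially passing "suite" stands in for a real test runner here.
print(review_gate([sys.executable, "-c", "pass"]))  # prints "ask-the-agent"
```

In a real setup the command would be your actual runner (pytest, go test, whatever the repo uses), and the “ask the agent” branch would feed your review questions back into the agent session.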
With tools like Claude Code, Gemini CLI, and my years of enterprise architecture experience, I was compressing timelines that used to stretch across months into days. I was building things I genuinely didn’t think were possible six months earlier. It felt like I had found a cheat code.
There was just one problem: I forgot to build the brakes.
The Speed Trap
Last week I co-hosted a seminar at the Google office together with Fredrik Malmsten and Katia Rigbrandt from Knowit Insight. The topic: building human capability for agentic AI. A room full of leaders, all grappling with the same question: what does it actually take for AI to create real business value? The conversation kept circling back to one theme: AI is a leadership question, not a technology question. The real value emerges at the interface between human and agent — and understanding how we as leaders actually steer that capability is the next challenge.
I went home that evening feeling energized. And then, somewhere between closing my laptop and not closing my laptop, I realized I should probably listen to my own advice.
I’d recently come across a study from UC Berkeley, covered in both Harvard Business Review and TechCrunch. The finding was counterintuitive: AI doesn’t reduce work. It intensifies it. And the people most at risk? The ones who embrace it the most.
Reading those articles was uncomfortably familiar.
The researchers identified three mechanisms, and I recognized all of them in myself:
Task expansion. When AI makes complex things feel accessible, you start doing work you never would have attempted before. “I can build a cloud platform? Cool, let me also build an agent manager. And an API gateway. And a CLI tool. And while I’m at it, why not a personal assistant?” Each project was justified on its own. Together, they were a recipe for running hot.
Blurred boundaries. The “ease” of prompting means there’s no natural stopping point. I’d find myself sending “one last command” before bed. Reviewing agent output during lunch. The work became ambient — always within reach, always tempting. When starting a task takes thirty seconds, the gap between “I’m done for the day” and “actually, let me just…” disappears.
The “more” paradox. AI frees up time — but that time rarely turns into rest. It turns into the next project. The freed-up hours get quietly reinvested into more scope, more output.
To be clear: this isn’t a story about burnout. I wasn’t crashing. I genuinely believed I could do it all — and the tools made that belief feel completely rational.
CLI-based agentic tools like Claude Code and Gemini CLI are incredibly powerful. In my work, I meet customers and partners almost daily who ask the same question: how do I actually get started? If you’re curious about Gemini CLI, this walkthrough from Google is the best introduction I’ve seen so far.
I quickly figured out that using tmux or having multiple terminal tabs open let me watch several agents work on different things at the same time. I’d prepare work items for them, queue everything up, and then walk the dog while they ran. When I came back, there’d be results waiting for review across three or four streams.
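The fan-out part of that workflow is simple to sketch. This is a minimal, assumption-laden stand-in: the stream names echo the projects above, and the commands are placeholder shell-outs where a real setup would invoke an agent CLI in each tmux pane.

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

# Hypothetical work items: in reality each command would be an agent
# invocation pointed at a prepared task; placeholders keep this runnable.
work_items = {
    "assistant": [sys.executable, "-c", "print('assistant: done')"],
    "cluster":   [sys.executable, "-c", "print('cluster: done')"],
    "gateway":   [sys.executable, "-c", "print('gateway: done')"],
}

def run_stream(name, cmd):
    """Run one work stream and capture its output for later review."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return name, result.stdout.strip()

# Fan the streams out in parallel, then review everything in one sitting —
# which is exactly where the human cognitive load lands.
with ThreadPoolExecutor() as pool:
    results = dict(pool.map(lambda kv: run_stream(*kv), work_items.items()))

for name, output in results.items():
    print(f"[{name}] {output}")
```

The machine part parallelizes for free; the review queue it produces does not, which is the trap the next paragraphs describe.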
It felt like I had cracked the code for parallel productivity. But here’s what I didn’t account for: the agents don’t need oversight breaks, but I do. Every time I reviewed an agent’s output, I was making judgment calls — is this correct? Does this fit the architecture? Is this going to break something downstream? Multiply that across four parallel streams and the cognitive load quietly stacks up. It’s not the work itself that drains you. It’s the constant technical oversight, the context-switching between problem spaces, and the mental cost of maintaining quality across all of them.
I was achieving at a level I hadn’t been able to reach before. But the tiredness kept creeping up. Not because I wasn’t sleeping — I was, mostly. It was that my mind never really took breaks the way it used to. Even when I stepped away, part of my brain was still reviewing agent output.

If you know hardware, you know the term thermal throttling — when a CPU is running so hot that it has to slow itself down to avoid damage. The output stays high, but the system is degrading underneath. That’s what was happening. I wasn’t checked out. I was too checked in.
Managing the Manager
Here’s the irony: I had literally just stood in front of a room full of leaders and told them that the hardest part of agentic AI isn’t the technology — it’s the human side. And then I went home and kept prompting.
If we’re all agent managers now, the first agent we need to manage is ourselves.
I’ve started small. The first thing I did was implement a WIP limit — Work In Progress, a concept from lean manufacturing. Today, I work on one exploratory project at a time. Not two. Not “one plus a quick side thing.” One.
It sounds trivial. It’s not. When you’re used to parallel-processing two architectures and context-switching between an agent runtime and a cloud platform, forcing yourself to single-task feels like driving in first gear. But your brain’s cache actually gets to clear. The quality of your focus on that one thing goes up. And you end the day less drained than you started.
The second thing was more creative. I realized that my willpower alone wasn’t going to get me away from the terminal. I know myself too well — if I can keep working, I will keep working. So I did what any good architect does when a system needs an external health check: I outsourced it.
I rehired my personal trainer. Paid twelve months upfront.
The irony is not lost on me. I build autonomous agents designed to operate without human intervention, but I need to pay an actual human to make sure I leave the house. It turns out it’s much harder to stand someone up than to ignore your own calendar reminder. The trainer is an external interrupt — a forced system reboot that my internal scheduler can’t override.
I’ve also started playing more tennis and hitting the gym regularly. These aren’t hobbies anymore. They’re system maintenance. In the old days, we had natural breaks built into the development cycle — compiling, deploying, waiting for CI. In the agent era, those breaks are gone. The feedback loop is instant. If you want downtime, you have to manufacture it.
The Brakes Matter More Than the Engine
Eight months of building at full speed taught me something that no documentation or API reference ever could: the hardest problem in the agent era isn’t technical. It’s human.
I see the internet obsess over which model is best and which framework is fastest. But the bottleneck isn’t the technology. It’s us. Our attention. Our energy. Our ability to produce sustainable, high-quality output.
When I talk to technology leaders, I don’t start the conversation with “I’m going to increase your output.” I start with quality. If you just try to go faster, quality doesn’t merely risk suffering; it will suffer. The goal should be to raise the lowest level of quality across your teams. Help them produce at a consistent velocity, but with higher standards. If you start there, you’re going to have a much better time adopting agentic AI than if you just try to squeeze more output out of your developers.
The goal for 2026 isn’t just to build the fastest car. It’s to make sure the driver can handle the speed — and that the road is worth driving on.
Everyone who embraces AI will go through an adjustment period. Some people can work intensely for a short burst. Others can sustain it longer. But everyone has a limit, and it sneaks up on you.
And if you’re a people manager — this applies to you too. Help your team adopt AI in a way that raises quality, not just velocity. Find the people with the hunger — the ones who are curious, who want to experiment with agentic tooling — and give them the space to try things. Tie those experiments back to a real business process. That’s where the value starts.
But here’s the thing: if you’re a manager who isn’t interested in using AI yourself, but expects everyone else to figure it out — you’re asking your team to navigate terrain you’ve never walked yourself.
This is Part 2 of a series about what I learned using AI in 2025. Read Part 1 if you missed it.