What I’m actually worried about
Invisible systems, untraceable decisions, and the real risk of enterprise AI
Updated April 19, 2026

I started paying close attention to AI in late 2022.
I was at a client site the week ChatGPT launched, a Swiss enterprise where a PowerPoint deck goes through six approval layers before reaching a steering committee. By Wednesday, a marketing director had generated a full campaign brief with GPT-3.5 and sent it to her team with “what do you think?” in the subject line. By Friday, legal was in the thread. By the following Monday, an informal ban was in place. By that Thursday, three people had already found ways around it.
I had seen this before. Not with AI, but with cloud, APIs, mobile, BYOD. The pattern is always the same: a new capability appears, someone uses it, governance reacts, and workarounds emerge. What was different this time was the speed. The gap between what an individual employee can do and what the organisation can control used to be measured in years. For cloud, roughly five. For mobile, three. For API programmes, maybe four. For AI, it is closer to twenty minutes.
That gap is the thing I have been thinking about ever since.
For context, I have spent the last eighteen years on the enterprise side of technology, working on integration architecture, middleware, DevOps, and programme delivery across organisations like Swisscom, UBS, Philips, and Roche. I am now a Senior Account Partner at Salesforce. Across all of that, the constant has been the same problem: how to close the gap between what technology makes possible and what an organisation can realistically absorb.
Most of my career, I have been on the “make it work” side. I built integration layers, designed governance for API programmes, and ran PoCs that worked in isolation but broke the moment they touched production. I am not a researcher. I learn by building things and seeing where they fail. Since 2023, that has meant working hands-on with AI, from fine-tuning small models in a home lab on an RTX 4090 to building agent workflows that looked convincing until they met real-world constraints. That experience led me to a simple conclusion: in enterprise settings, the problem is not the model. It is everything around the model.
What worries me, then, is not superintelligence. I understand why it dominates the conversation: it is a clean and legible risk. But it is not the one most organisations will face first.
What worries me is the gap.
More precisely, the gap between what a single knowledge worker can now do with AI, and what the organisation around them is equipped to see, control, or even be aware of. That gap is widening. Individual capability is accelerating quickly, while organisational absorption is not. And organisations that cannot absorb a new technology do not stop using it. They use it anyway, outside the structures that are supposed to govern it.
We have already seen the early versions of this. In 2024, it looked like employees pasting sensitive data into ChatGPT. In 2025, it looked like small agent workflows running inside departments that IT had never inventoried. In 2026, it increasingly looks like systems that take actions on their own: sending emails, updating CRMs, triggering payments. These systems often run under service accounts created for entirely different purposes, owned by people who may no longer even be in the company.
None of these will cause a single catastrophic failure. What they will create instead is a slow accumulation of decisions made by systems that nobody explicitly approved, with no reliable record of what was decided or why. At some point, someone will be asked to explain one of those decisions. Not in theory, but in a room, under pressure, with consequences attached. And the honest answer will be that it cannot be reconstructed, not because the AI was malicious, but because it was never visible in the first place.
That is what worries me.
I am writing a book about this. It is called Govern or Fail, and it will be published in late 2026. The argument is straightforward: enterprise AI scales when governance is built into the architecture, and fails when governance is treated as policy alone. The book lays this out in practical terms, with frameworks, operating models, and a cost perspective that shows why governed systems eventually outperform ungoverned ones. It also addresses the friction, the internal resistance, and the organisational realities that tend to derail these efforts.
It is not a pitch for a product or a consulting service. I am writing it because I have been asked the same questions often enough by people who need concrete answers, and I have not found a book I would confidently hand to a CIO.
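To make “governance built into the architecture” slightly more concrete: at the smallest scale, it can mean that no agent-triggered action executes without leaving a record that can later be reconstructed. The sketch below is a minimal illustration in Python, not a framework from the book; the names, the log format, and the CRM placeholder are invented for the example.

```python
import functools
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

# Append-only action log; in a real system this would be a tamper-evident store.
AUDIT_LOG = Path("agent_audit.jsonl")

def audited_action(action_name: str, owner: str):
    """Refuse to let an agent-triggered action run without leaving a record:
    a named human owner, a timestamp, the inputs, and the outcome."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {
                "id": str(uuid.uuid4()),
                "action": action_name,
                "owner": owner,  # a person, not a repurposed service account
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "inputs": {"args": repr(args), "kwargs": repr(kwargs)},
            }
            try:
                result = fn(*args, **kwargs)
                record["outcome"] = repr(result)
                return result
            except Exception as exc:
                record["outcome"] = f"error: {exc!r}"
                raise
            finally:
                # The record is written whether the action succeeds or fails.
                with AUDIT_LOG.open("a", encoding="utf-8") as f:
                    f.write(json.dumps(record) + "\n")
        return wrapper
    return decorator

@audited_action("update_crm_record", owner="jane.doe")
def update_crm_record(account_id: str, field: str, value: str) -> bool:
    """Placeholder for a real CRM call."""
    return True

if __name__ == "__main__":
    update_crm_record("ACME-001", "segment", "enterprise")
```

The point is not the decorator. The point is that the record exists before anyone is asked the question, which is exactly what the ungoverned version never produces.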
This site is where I am working that thinking out in public. Some of what appears here will be drafts of the book. Some will be notes from experiments: what worked, what failed, and what I misunderstood. Some will be commentary on how the regulatory landscape, particularly in Europe, is evolving when viewed from inside organisations that actually have to implement it.
I am not going to publish on a fixed schedule. I will write when there is something worth saying, which will likely average out to a couple of posts a month.
The audience is specific. The CDO whose pilot has stalled. The architect asked to define governance without the authority to enforce it. The CIO who is asked how the company is using AI and realises that the honest answer is incomplete.
If that is you, this is for you.
Fabio Aulico writes on enterprise AI, governance, and integration architecture. He is the author of the forthcoming Govern or Fail: The Executive Playbook for Enterprise AI That Actually Scales (self-published, late 2026).