At most enterprises right now, AI adoption has a strange split personality.

On the surface, there are the sanctioned initiatives: a strategic LLM partnership, a vendor-led pilot, and a few carefully worded policy documents. Underneath, there’s the real activity. Engineers quietly connect AI applications to business data and SaaS tools using the Model Context Protocol (MCP).

Employees are experimenting with open-source tools and their favorite AI clients at home because they can’t get them past security. Teams use “unofficial” AI applications to get work done faster, then copy the results into approved systems.

If you listen to a lot of enterprises, the story is always the same:

“We aren’t seeing the ROI.” “Our company is nervous about security.” “We can’t figure out what business problem to solve first.”

But as one of the panellists at our Boston summit said, if the customer is always the problem, you’re probably the problem.

There’s a deeper issue at play here. It’s not that enterprises aren’t interested in agentic AI. It’s that they’re being asked to adopt it on infrastructure and governance models that were never designed for autonomous systems in the first place.

That context is exactly why Barndoor AI exists. Founded by Oren Michels, Barndoor focuses on enabling organizations to adopt agentic systems without losing control of security, compliance, or operational integrity. 

Rather than asking companies to bolt AI onto legacy environments, Barndoor rebuilds the foundations, providing the trust layer enterprises need to use autonomous agents responsibly and at scale.

In our panel with Barndoor at the Generative AI Summit in Boston, featuring Oren Michels, Co-Founder & CEO of Barndoor.ai, and Quentin Hardy, Principal at LGTM LLC, that tension came through over and over again. What emerged was not another “AI is the future” story, but something more grounded:

  • AI is already here, often in unsanctioned ways.
  • The real bottleneck isn’t model capability. It’s control, visibility, and trust.
  • And if we don’t solve that, we’ll end up with a lot of activity, but not a lot of durable value.

This article is about that gap and why Barndoor has chosen to close it.

The real adoption problem: AI without a business centre of gravity

The industry right now is incredibly “buzzword compliant.”

LLMs, frontier models, RAG, MCP servers, agentic systems – we have an alphabet soup of technologies and protocols. But very few of those terms tell a CFO, a Head of Ops, or a CIO what problem is actually being solved, what the new workflow looks like, or what it will cost to run in production.

That disconnect showed up repeatedly in Boston.

On one side, you have hyperscalers and large software vendors selling increasingly large bundles of “AI platform + orchestration + consulting.” They are racing to own the orchestration layer because, as one panellist dryly noted, “when you control the orchestration, you control the revenue.”

On the other side, you have teams inside enterprises that don’t need another giant platform.

They need:

  • A safe way to let people experiment at the grassroots level.
  • A way to stop AI from doing destructive things.
  • A way to see which experiments are actually working, so they can spread those patterns.

What they have instead is a choice between:

  • Locking AI down so hard that it becomes useless, or
  • Turning a blind eye while “shadow AI” grows outside governance.

Neither of those ends well.

Shadow AI is the new BYOD

If this feels familiar, it’s because we’ve seen this movie before.

The panel drew a direct line back to the early mobile and cloud eras. BlackBerry and early smartphones were not adopted because IT blessed them in a strategy document. They were adopted because sales teams bought them with their own budgets, used them to close deals faster, and forced the organization to catch up.

AWS snuck in through side doors when servers got cheap enough for engineers to expense them. One large company famously miscounted its server fleet by 100,000 machines because so much of the infrastructure had been acquired locally, not centrally.

The same pattern is happening with AI and MCP:

  • It’s trivial for a developer or power user to download an open-source MCP server and connect it to Claude or Cursor (see the sketch after this list).
  • Many of those MCP servers are spun up just to show a team is “AI compliant”, then sit unmaintained, with no visibility, growing metaphorical cobwebs.
  • Even in highly regulated environments, people test AI workflows from home first because they can’t get access to the right infrastructure inside the firewall.
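
To make that “trivial” point concrete, here is roughly what standing up an MCP server looks like with the official Python MCP SDK. This is a minimal sketch, not a recommended design: the “customer-lookup” server, its single tool, and the hardcoded data are hypothetical stand-ins for whatever internal system an engineer would actually wire up.

```python
# Minimal MCP server sketch using the official Python MCP SDK (the `mcp` package).
# The "customer-lookup" server and its single tool are hypothetical examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("customer-lookup")  # the name an AI client sees for this server


@mcp.tool()
def lookup_customer(email: str) -> str:
    """Return basic account details for a customer, given their email address."""
    # A real tool would call a CRM or internal API; hardcoded here to stay self-contained.
    fake_crm = {"jane@example.com": "Jane Doe, Enterprise plan, renews 2026-03-01"}
    return fake_crm.get(email, "No matching customer found.")


if __name__ == "__main__":
    # Runs over stdio, which is how desktop clients such as Claude or Cursor
    # typically launch local MCP servers.
    mcp.run()
```

A couple of dozen lines, no review, no registration, and an AI client can now read whatever that tool exposes. That is the barrier to entry we are talking about.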

Officially, the organization is moving cautiously. Unofficially, AI is already threaded through workflows in ways that security and IT can’t see.

This is not a temporary phase. It’s the default behaviour of ambitious people who want to get more done. If your best people aren’t doing this, you probably have a different problem.

The question is not “How do we stop this?” The question is: How do we turn this into something safe, visible, and scalable?

Why the AI chat interface leaves most of the company behind

There’s another structural problem that came up on the panel.

Most of the high-profile success stories in AI today are developer-centric. Coding copilots make total sense as a chat interface because software engineers already work by talking to complex systems: compilers, debuggers, and senior colleagues.

That chat-first model does not make sense for teams outside engineering.

Sales, operations, finance, compliance – these jobs are not built around “asking a PhD-level intelligence for suggestions.” They are built around processes, systems of record, and long-lived workflows.

AI that only exists in a chat window will remain stuck in “suggestion mode” for these teams:

  • It can surface documentation.
  • It can summarise information.
  • It can generate content.

But it still expects a human to take the final action in Salesforce, update the ticket, change the entitlement, or close the loop.

That’s useful, but it’s not agentic.

On stage, Oren Michels offered a sharper definition: agentic AI is not about suggesting what a human should do next. It is about the AI actually doing the thing – with the right guardrails, on the right systems, under the right identity, and with visibility.

That’s where both the opportunity and the risk explode.

Agentic AI as “enthusiastic interns”

One of the more memorable metaphors from the session was this:

Think of your AI agents as very enthusiastic interns.

They are eager. They are fast. In many cases, they are surprisingly capable. But they lack context. They don’t understand your culture, your history with a customer, or the subtleties of your regulatory environment. If you give them access to everything on day one, you are setting them – and yourself – up to fail.

With human interns, we intuitively understand this. You bring someone in. You:

  • Give them limited access to systems.
  • Ask them to complete specific tasks.
  • Watch how they go about it.
  • Increase their scope as they demonstrate judgment and reliability.

If they handle sensitive information poorly or break the process, you pull them back, coach them, and reassess.

Agentic AI needs the same pattern – but encoded into the infrastructure, not left to informal norms.

This is the space Barndoor is building in: governing what AI agents can see and do across MCP, systems of record, and enterprise workflows, with the same seriousness we apply to human identity and access management.

Why governance isn’t a brake – it’s how you get to action and scale

Governance has a reputation problem.

Inside a lot of organisations, it’s seen as the team that says “no” – the group that appears late in the game with redlines, mandatory reviews, and a long list of controls that assume the worst.

The panel took a different view: governance is how you enable growth without losing control.

Before we get into the risks, it’s important to be clear about what’s actually happening inside MCP. Every MCP server exposes a set of “tools”: the specific actions an AI agent can take, such as updating records, fetching data, modifying permissions, or initiating workflows.

These tool calls are what make agentic AI powerful, but they’re also what make it risky: without the right guardrails, an AI agent can take actions a human would never be allowed to.
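
To ground that, here is roughly what a tool looks like from the client’s side, sketched as plain Python dictionaries. The field names follow the MCP specification’s tools/list and tools/call messages; the update_entitlement tool itself is hypothetical.

```python
# Sketch of the messages behind an MCP tool, shaped after the spec's
# tools/list and tools/call methods. The tool shown here is hypothetical.

# What a client learns when it asks a server for its tools:
list_tools_result = {
    "tools": [
        {
            "name": "update_entitlement",
            "description": "Change a customer's plan or feature entitlements.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "account_id": {"type": "string"},
                    "new_plan": {"type": "string"},
                },
                "required": ["account_id", "new_plan"],
            },
        }
    ]
}

# What the agent sends when it decides to act:
call_tool_request = {
    "method": "tools/call",
    "params": {
        "name": "update_entitlement",
        "arguments": {"account_id": "ACME-0042", "new_plan": "enterprise"},
    },
}
```

The protocol defines how the agent discovers and invokes the tool. Whether this particular user, acting through this particular agent, should be allowed to change an entitlement is left to whatever sits around it – which is exactly where the guardrails come in.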

Without proper AI controls over MCP and the tools agents can call, you quickly end up with exactly the risks enterprises fear most:

  • Shadow AI: unsanctioned apps bypassing security.
  • Data leaks: sensitive information being sent to places it shouldn’t be.
  • Unrestricted access: over-permissioned agents modifying or deleting critical business data.

You can try to lock all of this down with policy PDFs, firewalls, and “do not use” announcements. Or you can accept that people will keep experimenting, and give them a structure where:

  • Only approved MCP servers and tools are available.
  • Every AI app and agent connects through a secured gateway.
  • Access is defined at the level of user, role, system, and action – not just “on” or “off.”
  • Every call is logged and visible, so you can see both the success stories and the actions your policies blocked. 
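
Here is a generic sketch of what that can look like in practice – this is not Barndoor’s API, just an illustration of the shape of the problem: a gateway that decides per user, role, server, and tool, and logs every decision either way.

```python
# Generic sketch (not Barndoor's actual API) of the kind of policy check an
# MCP gateway could run before forwarding a tool call. Access is decided per
# user, role, server, and tool rather than a blanket on/off switch, and every
# decision is logged so both the wins and the blocked actions stay visible.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mcp-gateway")

# Hypothetical policy: which roles may call which tools on which servers.
POLICY = {
    ("crm", "lookup_customer"): {"sales", "support"},
    ("crm", "update_entitlement"): {"support_lead"},
}


@dataclass
class ToolCall:
    user: str
    role: str
    server: str
    tool: str


def authorize(call: ToolCall) -> bool:
    """Allow the call only if the caller's role is approved for this server/tool pair."""
    allowed = call.role in POLICY.get((call.server, call.tool), set())
    # Log every decision, allowed or blocked, for later review.
    log.info(
        "user=%s role=%s server=%s tool=%s decision=%s",
        call.user, call.role, call.server, call.tool,
        "allow" if allowed else "block",
    )
    return allowed


# An agent acting for a sales rep can look up a customer,
# but cannot change entitlements.
print(authorize(ToolCall("jane", "sales", "crm", "lookup_customer")))     # True
print(authorize(ToolCall("jane", "sales", "crm", "update_entitlement")))  # False
```

The point is the granularity: allow or block per action, per role, per system, with a record you can audit later, not a single on/off switch for “AI” as a whole.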

That’s effectively how Barndoor positions itself: as the control plane for the agentic enterprise, a place where you can centralise visibility and policy for AI agents across your employees and business data.

It’s not about slowing people down. It’s about making sure their creativity doesn’t outpace the organisation’s ability to manage risk.

Why governance is a growth enabler for AI

Strong guardrails don’t kill experimentation. They make it safe to move faster. Barndoor calls this “the control plane for the agentic enterprise”: governance that unlocks, rather than blocks, AI adoption.

From hidden wins to repeatable success

One of the most useful points in the panel was subtle but important: governance isn’t just about catching bad things. It’s about discovering good things.

If you have no visibility into MCP traffic, you don’t just miss security issues. You also miss:

  • The engineer who quietly automated a tedious reconciliation workflow.
  • The support team that wired an agent to resolve certain ticket types end-to-end.
  • The operations manager who built an AI-driven scheduling workflow that saved hours each week.

In a world without a control plane, these wins stay local. They live in private repos, personal workflows, and small teams. They never turn into organisational patterns.

With a proper governance and observability layer, you can:

  • See which AI workflows are emerging.
  • Quantify their impact.
  • Turn them into reusable patterns for other teams.
  • Learn from failures just as deliberately as you learn from successes.

This is where Barndoor’s focus on “visibility, accountability, and governance” becomes non-negotiable. It’s not trying to orchestrate every agent; it’s trying to see and guide what’s already happening, so enterprises can move from isolated experiments to getting real, durable value out of agentic AI.

What this means if you want to be the “AI hero” in your company

The panel ended with a simple challenge to the audience: if you want to be a hero inside your organisation, you have to play both sides.

You have to acknowledge that your colleagues are already using AI, sometimes in ways that make your security team nervous. And you have to help design a path where:

  • Experimentation is encouraged, not punished.
  • Failure is treated as learning, not as a reason to shut things down.
  • Governance is baked into the plumbing, not bolted on at the end.
  • AI agents are treated like interns: limited at first, then progressively trusted as they prove themselves.

That’s not a role that belongs solely to vendors or solely to internal teams. It’s a partnership.

Barndoor’s bet is that enterprises will need a dedicated control plane for this – something built for AI agents, MCP connections, and complex policies, and built deeply enough to be more than “identity, but with a new coat of paint.”

Whether or not you adopt Barndoor specifically, the underlying idea is hard to ignore:

If we want AI agents to stop living in the shadows and start doing real work at scale, we need to give them the same kind of structured, observable environment we give human workers, but built for AI. Granular permissions. Training wheels. Feedback loops. Visibility.

The companies that crack this will be the ones that treat governance not as a gate, but as the infrastructure that makes agentic AI genuinely safe, accountable, and transformative.