This article comes from Nick Nolan’s talk at our Washington DC 2025 Generative AI Summit. Check out his full presentation and the wealth of OnDemand resources waiting for you.


What happens when a powerful AI model goes rogue? For organizations embracing AI, especially large language models (LLMs), this is a very real concern. As these technologies continue to grow and become central to business operations, the stakes are higher than ever – especially when it comes to securing and optimizing them.

I’m Nick Nolan, and as the Solutions Engineering Manager at Fiddler, I’ve had countless conversations with companies about the growing pains of adopting AI. While AI’s potential is undeniable – transforming industries and adding billions to the economy – it also introduces a new set of challenges for LLMs in production, particularly around security, performance, and control. 

So in this article, I’ll walk you through some of the most pressing concerns organizations face when implementing AI and how securing LLMs with the right guardrails can make all the difference in ensuring they deliver value without compromising safety or quality.

Let’s dive in.

The growing role of AI and LLMs

We’re at an exciting moment in AI. Right now, research shows around 72% of large enterprises are using AI in some way, and it’s clear that generative AI is definitely on the rise – about 65% of companies are either using it or planning to. 

On top of this, AI is also expected to add an enormous amount to the global economy – around $15.7 trillion by 2030, but let’s keep in mind that these numbers are just projections. We can only guess where this journey will take us, but there’s no denying that AI is changing the game.

But here’s the thing: while the excitement is real, so are the risks. The use of AI, particularly generative AI, comes with a unique set of challenges – especially when it comes to ensuring its security, quality and accuracy. This is where guardrails come into play. 

If organizations do AI wrong, the cost of failure can be astronomical – not just financially, but also in terms of reputational damage and compliance issues.

LLM economics: How to avoid costly pitfalls
Avoid costly LLM pitfalls: Learn how token pricing, scaling costs, and strategic prompt engineering impact AI expenses—and how to save.

The security and accuracy challenges of LLMs

So, what’s holding companies back? One big issue is security. According to a study from OWASP, some of the biggest concerns for LLMs and generative AI are:

  • Prompt injection – where attackers manipulate AI systems.
  • Sensitive information leakage – where AI accidentally shares private data.
  • Data poisoning – corrupting the training data to mislead models.
  • Misinformation - when AI product false or misleading information.

With all these risks, it’s no surprise that about 30% of generative AI projects are abandoned after the proof of concept phase. These projects often face tough questions from stakeholders who want to know: “How can we be sure this will work in the real world? Can we trust it?”

It's a bit of a double-edged sword; companies know AI is the future, but they also know it’s a complex and potentially risky technology to implement. If AI models aren’t secure or performant, they could do more harm than good.

The fall of centralized data and the future of LLMs
Gregory Allen, Co-Founder and CEO at Datasent, gave this presentation at our Generative AI Summit in Austin in 2024.

How to balance innovation with risk management

In my experience working with companies across industries, I’ve seen the balancing act they’re constantly doing, attempting to push the boundaries with AI while managing the risks that come with it. The key concerns organizations have are:

🔒 Security – Ensuring that sensitive data is never exposed.

Speed – AI has to work fast, especially when it’s customer-facing.

💰 Cost – Running AI models can get expensive, and organizations want to keep costs manageable.

Quality – At the end of the day, AI needs to deliver the right output, consistently.

In an ideal world, AI would work without fail, but in practice, it’s a bit more complicated. That’s why companies need robust tools to monitor, secure, and improve their models and applications in real time, and that’s where guardrails come in.

Top gen AI, LLMOps, agentic AI, and CAIO events to attend in 2025
Save the dates now because 2025 will be a busy and exciting year for generative AI, LLMOps, agentic AI, and Chief AI Officers (CAIO).

What are guardrails for AI, and why do we need them?

When I talk about guardrails, I’m referring to mechanisms that allow us to monitor and control AI models in real time. They’re designed to prevent issues before they escalate, ensuring that the AI system doesn’t go off-course or cause harm. 

Think of it like having a safety net – if something goes wrong, you can catch it early and fix it. Guardrails should be able to address issues like:

  • Security concerns – Stopping malicious behavior like prompt injections or data leaks.
  • Safety issues – Ensuring that models are not generating toxic or harmful content.
  • Quality assurance – Verifying that the outputs are accurate and useful.

These guardrails are a critical part of the AI lifecycle. The goal is to enable organizations to trust the models they’re using, knowing they have oversight at every step.

Security and privacy issues in cloud computing
The business world is changing rapidly, and the rise of cloud computing has created huge security and privacy concerns.

Fiddler’s role in AI security and accuracy

At Fiddler, we’ve built an observability platform that’s designed to help organizations monitor their AI systems, both generative and traditional, in real time. This is about giving teams visibility into how their models are performing and offering a way to step in and adjust things when necessary.

For example, let’s talk about hallucinations – those moments when AI generates content that’s inaccurate or completely made up. It’s a common issue, but one that can be hard to catch without the right tools. With our platform, you can track over 50 different out-of-the-box LLM metrics plus custom LLM metrics, which help identify when these hallucinations are happening and how to fix them.

We also provide tools for monitoring metrics like PII detection (sensitive information), toxicity (harmful or biased content), and jailbreaking attempts (efforts to manipulate the system). These features are crucial for preventing AI from going off track and ensuring it’s operating within secure and ethical boundaries.

Looking ahead

As AI continues to evolve, security challenges will only become more complex. The threats will keep changing, and we’ll need to be vigilant to keep pace. 

By implementing robust guardrails and monitoring systems, we can ensure that AI remains a force for good, helping organizations unlock their potential without introducing unnecessary risks.

One thing is for sure: responsible AI will become a necessity, not just a luxury. It’s not just about building powerful systems – it’s about making sure those systems work reliably and securely in the real world. 

AI should work for us, not against us, and ensuring its accuracy, quality, and security is key to making that happen. Try Fiddler Guardrails for free to start protecting your LLM applications.