My name is Helenio Gilabert. In this article, I’m going to tell you all about how we run artificial intelligence (AI) at Edge Solutions. It’s such a fascinating topic, but it can be a little overwhelming. So, I’m going to break it down into a few key talking points.

But before we dig into the meat of this article, let’s have a little background on what's challenging the industry today.

What’s challenging AI?

It would be the understatement of the century to say that we've been through an incredible amount of transformation over the last two years. Some of it has had some tremendous impact on the following:

  • How our customers operate.
  • The requirements that they're presented with.
  • The conditions they have to live under.
  • How quickly they're changing.

Of course, we also get compounding effects from supply chain issues and transportation issues, but for now we’ll stick with the main challenges facing us.

Remote operation requirements and cybersecurity

Giving employees the same access to resources remotely is hard enough, but then you have the very real threat of cybersecurity to contend with as well. Cyberspace can be a pretty scary place these days, and we’ve got to be vigilant!

Knowledge drain and knowledge transfer

A lot of people are starting to move out of their jobs into different jobs and, even worse, into different industries. There aren’t enough people joining the industry. Here’s a good example. Where I live, Calgary, is the oil and gas capital of Canada. The University of Calgary had to suspend its oil and gas engineering program because it did not have enough people applying to join it!

Capturing knowledge, and being able to transfer that knowledge is becoming critical!

Market volatility

Many companies have to react from one day to the other on new regulations, new mandates, and new problems with supply chains. It’s a fast-moving industry and keeping up with the advances, let alone the challenges, is not easy!

Environmental impact and sustainability

On top of all this, we have this incredible pivot towards new types of energy, and this awareness of environmental impact. This is a real hot topic today, right? And it’s no longer an option for organizations to lag behind, but it’s seriously affecting how companies operate.

So, the million-dollar question is, how can these new technologies help with all of this? But before we dig into that, let’s look at some common misconceptions around AI.

AI: perception vs reality

Slide of 'the current state of AI', with photos of well-known robots in films

Now, don't get me wrong, AI has been making incredible advances over the last few years. But as you can see, the public notion of AI is more based on science fiction rather than reality.

What AI is not

Well, what it certainly is not is this guy! 👇

Photo of the Terminator in metal skeleton form

Something that's gonna come up and take all our jobs and be our new overlord! Thankfully, that's not where we are today. AI is nowhere near that point of intelligence. Instead, it’s more of an extension of human actions. It augments what human beings do.

What AI is today

AI is very good at specific tasks that it can learn from a human and replicate. From a practical standpoint, the aspect of AI that is more common today is machine learning.

Machine learning replicates human behavior. It's certainly not at the stage where it’s going to come and hunt us all. Artificial intelligence takes what a human being can do and makes it more efficient. In this way, it frees human beings up from duller time-consuming work and allows them to focus on the things that only human beings can really do well.

AI in the industrial space

Slide of AI in Industrial Space

In the industrial space, we've been focusing a lot on machine learning. That's really the heart of AI: seeing what a human does, and replicating it in a more efficient way. And there are different techniques you can apply.

Our main goal for AI right now is to allow it to take the knowledge of an operator, capture it, automate it, replicate it, and deploy it at scale.

AI: from specialized to mainstream

Why do we firmly believe that Edge Computing is an incredible enabler for implementing AI at scale? The reason is the incredible advancements we've seen in IoT Edge controllers.

IoT Edge controllers

These are small industrial PCs with the following characteristics:

  • High levels of processing power.
  • Lots of memory and storage space.
  • Plenty of connectivity options.
  • Very reasonable prices.

One of the problems with AI in the past is that it was very expensive to run. And luckily for us, that’s no longer the case.

More options than ever before!

We now have more and more platforms that allow us to create machine learning models, and to train and build this technology. You don’t have to be a whizz at data science! The new platforms allow us to present data in a contextualized way.

What’s the result of this? Operators can understand what they're doing and interact with the data in a way that doesn't require them to be an expert in data science.

Platforms allow us to package these models and deploy them

This is regardless of your infrastructure. This used to be a really limiting factor before. But now you can take these models, package them in a Docker container, and deploy them in your cloud environment.

Or you can take that same model in a Docker container and deploy it in an Edge controller. These platforms allow you to treat those two environments as one. You can target and deploy your applications where you actually need them.
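As a sketch of what that targeting step can look like, the same packaged model image can be pointed at either environment. The image name, registry, and resource limits below are invented for illustration, not a real deployment recipe:

```python
# Sketch: the same packaged model image can be targeted at cloud or Edge.
# The image name, registry, and resource limits are illustrative only.

def deploy_command(image: str, target: str) -> str:
    """Build a `docker run` command for the chosen environment."""
    if target == "cloud":
        # In the cloud we can afford more memory per container.
        return f"docker run -d --memory=4g {image}"
    if target == "edge":
        # On an Edge controller, resources are constrained and the
        # container should come back up on its own after a reboot.
        return f"docker run -d --memory=512m --restart=always {image}"
    raise ValueError(f"unknown target: {target}")

# One model image, two deployment targets:
model_image = "registry.example.com/pump-advisor:1.0"
print(deploy_command(model_image, "cloud"))
print(deploy_command(model_image, "edge"))
```

The point of the sketch is the symmetry: the application code never changes, only the target you hand it to.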

We’ve changed the questions around AI

The question no longer is, what can we do with this technology? What can we afford? The question now is, where can we apply these technologies to get the maximum return on our investment? It's not a question of can we do it, it’s a question of should we do it.

Adoption of IT lifecycles in the OT space

One of the challenges you need to keep in mind when we're talking about industrial applications is lifecycle management. We’ve actually had Edge controllers in the industry for decades now, but we just called them something different.

The key difference

The key difference between the traditional industrial controller and the new IoT controller is the lifecycle of the controller itself. If you go into the market and buy an RTU or a PLC from Schneider, for example, you can expect it to run for 20-plus years.

In the past, we would install these things and forget about them. That's how we used to operate in an industrial space for many years. The lifecycle of the controllers is very long. These new IoT controllers, on the other hand, are on a different lifecycle schedule.

Every couple of years you have a new model that has:

  • Twice the processing.
  • Twice the memory.
  • More storage space.
  • Double the capabilities and capacity.

Software-based architectures – no dependencies

If you look at industrial controllers in the past, we had standards to program them, for sure. But every company had its own implementation of the standards, and they were not necessarily compatible with each other.

Once you developed a program, you were usually stuck with that vendor. There could also be compatibility issues between generations of controllers. There was a tightly coupled dependency between the software environment and the hardware environment. With these new Edge controllers, we're seeing a complete decoupling of that.

Appropriate for general applications

We're running on a standard operating system, usually either Linux or Windows IoT. We create an abstraction layer between the hardware and the software, so the software really doesn't know or care too much about what hardware is running underneath it. You just have to meet the software requirements to maintain that separation.

On top of that, we have some virtualization, like Docker, for example👇

Logo of Docker

This is the most common tool to provide an extra level of abstraction between the application and the environment it runs on. Bottom line: the software is no longer dependent on the hardware. We're breaking that dependency, which makes the lifecycle a little bit easier to manage.

Some drawbacks

The main one is reliability. The reliability you get with the current generation of IoT Edge controllers is very different from the reliability you expect and receive from industrial controllers.

They’re not built to last 20 years, but as long as you can break the dependencies between hardware and software, that is definitely manageable.

Possible solutions

That's when we start talking about digital twins of the controller itself. That way you can easily replace controllers: throw the old one away, put in a new one, and the new one downloads the current configuration from the digital twin.

There's no need to keep configuration software and reinstall, it's all taken care of automatically by the platform that's managing the environment.
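A minimal sketch of that idea, with class and method names invented for illustration: the twin holds the controller's desired configuration centrally, and a replacement unit hydrates itself from it on first boot:

```python
# Sketch: a digital twin stores the controller's configuration centrally,
# so a replacement device can restore itself automatically.
# All names here are illustrative, not a real platform API.

class DigitalTwin:
    """Central record of each controller's desired configuration."""
    def __init__(self):
        self._configs = {}

    def save(self, controller_id: str, config: dict) -> None:
        self._configs[controller_id] = dict(config)

    def restore(self, controller_id: str) -> dict:
        return dict(self._configs[controller_id])

class EdgeController:
    def __init__(self, controller_id: str, twin: DigitalTwin):
        self.controller_id = controller_id
        # On first boot, pull the current configuration from the twin --
        # no manual reinstall of configuration software needed.
        self.config = twin.restore(controller_id)

# A failed unit is swapped out; the new one picks up where the old left off.
twin = DigitalTwin()
twin.save("well-042", {"sample_rate_hz": 10, "protocol": "MQTT"})
replacement = EdgeController("well-042", twin)
print(replacement.config)
```

The managing platform, not a technician with a laptop, is the source of truth for what the device should look like.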

Safety configurations

Usually, we won’t recommend IoT gateways for critical process applications and safety applications. And with critical process applications, you also have real-time requirements for the control.

Now, you can get real-time capabilities from operating systems like Linux and others. But industrial controllers are built from the ground up with that real-time capability, rather than having it added on afterward.

From Cloud to Edge

People often ask whether it should be Edge or Cloud, but the reality is it shouldn't be one or the other. That’s right. It should be both!

Cloud acceptance in OT is increasing

You should have an environment that brings them both together, so you can choose based on the application. The first question you’re going to ask is: what is the best path to run your workloads on? What communication infrastructure do you have available? And the second one is:

Connectivity, bandwidth, and latency

How sensitive is your process to latency? If it takes a few seconds to make a decision and execute a command, will that cost you money? From the answers, you can define and develop your architecture, choosing to run your workloads either in the cloud or at the Edge.

Some factors to consider:

  • Is connectivity available?
  • Is it costly or cheap?
  • Do we have bandwidth restrictions?
  • How much does it cost to move data around and store it in the cloud?
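Those factors can be folded into a simple placement rule. This is only a sketch — the thresholds, field names, and cost model are invented for illustration, not a recommendation:

```python
# Sketch: choosing where to run a workload from a few of the factors above.
# Thresholds and parameter names are illustrative assumptions.

def choose_placement(latency_sensitive: bool,
                     connectivity_available: bool,
                     bandwidth_cost_per_gb: float,
                     data_volume_gb: float) -> str:
    """Return 'edge' or 'cloud' for a given workload profile."""
    if latency_sensitive:
        # Seconds of delay cost money: keep the decision loop local.
        return "edge"
    if not connectivity_available:
        return "edge"
    # Moving large volumes over an expensive link favors local processing.
    if bandwidth_cost_per_gb * data_volume_gb > 100.0:
        return "edge"
    return "cloud"

print(choose_placement(True, True, 0.01, 5))    # latency-critical control loop
print(choose_placement(False, True, 0.01, 5))   # cheap link, small data volume
```

In a real architecture this decision is made per workload, which is exactly why you want a platform that can target both environments.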

Another interesting factor to consider is the ‘law of the land.’ This relates to the ownership of the data, the classification of the data, and potentially, laws in the place where the data is generated that mean it cannot leave the country where it was generated.

You want maximum flexibility to adapt your architecture and solution to those requirements, without being constrained by the technology to one or the other. And you need to be able to respond and adapt to these conditions.

Adoption of commercial models

Together with the increased adoption and availability of cloud, another change has come: a change in commercial models.

Usually, when we talk about ownership versus subscription, most people fixate on CapEx vs OpEx. But this is not the only thing you should be considering when thinking about the new licensing models that are becoming more and more common.

Risk and risk profile

There are implications to this. The most important one is risk, the risk profile, and the associated behaviors. The risk profile can be understood in this way: when you buy a system, you are taking on all the risks of implementing that system, right? You pay the vendor, and as soon as you pay them, the system is yours. The risk is yours.

The risk is now on the vendor

With the subscription model, it’s really the other way around. It's relatively easy to switch vendors. The risk now is on the vendor to make sure that you are successful. They need to make sure you're reaping all the benefits of the investment you're making.

If you’re not getting just an initial benefit or an ongoing benefit, why subscribe to that solution? You can just cancel your subscription and move on to something else.

Ensures ongoing commitment to customer’s adoption and success

This is a much better model to innovate because we are actually answering the needs and requirements of the customers. We’re engaged in more of a partnership. Now we have both the same interests: successful, valuable implementation.

It takes a village: the potential of AI

Here are a couple of suggestions, comments, and observations based on all the discussions we've had with customers in the industry.

A vibrant ecosystem

This change from ownership to subscribership of the new cloud environments has resulted in an explosion of innovation in the industrial space. There's an incredible ecosystem of companies producing new solutions, offering new services, and creating new technologies that can be implemented to help with your operations.

So, the moral of the story is, don't go it alone. There are many companies that can help you.

Choose the right partners

These are the partners that can actually deliver value, the ones that will work with you not only to sell you something but to implement it correctly and maintain it. In this way, you can continue to get the benefits of your investment.

The right partner will help you test, validate these new applications, and find the right use cases. They will then help you to implement them at scale.

Don't get stuck in pilot purgatory

Prove value and scale fast. This is absolutely critical. Don't get stuck in pilot purgatory: you take one little technology and try to find out what it's useful for; you implement it here, implement it there, but you never try to scale it. And because of that, it’s usually not successful.

Make sure that, together with your vendors, you select the right use cases, prove value, and scale.

A fully decoupled model

Slide of 'A fully decoupled model'

You need to decouple software from hardware, but you also need to decouple the controllers from the IO.

Hardware

We do this so that change is easy. We need new types of IO that controllers can subscribe to. You can have one controller subscribing to multiple IO modules, or one IO module feeding multiple controllers. That way, you can have software standbys, software redundancy, etc.

It opens up many options in terms of how to deploy and develop applications. And then you can also make sure that those IO modules support the newest protocols.
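A toy sketch of that subscription model (all names invented): one IO module can publish to several controllers, and one controller can subscribe to several modules, which is what makes software standbys possible:

```python
# Sketch: decoupling controllers from IO via publish/subscribe.
# Class and signal names are illustrative only.

class IOModule:
    """An IO module that publishes readings to any subscribed controller."""
    def __init__(self, name: str):
        self.name = name
        self._subscribers = []

    def subscribe(self, controller) -> None:
        self._subscribers.append(controller)

    def publish(self, value: float) -> None:
        # Every subscribed controller gets the same reading.
        for controller in self._subscribers:
            controller.on_reading(self.name, value)

class Controller:
    def __init__(self, name: str):
        self.name = name
        self.readings = {}

    def on_reading(self, io_name: str, value: float) -> None:
        self.readings[io_name] = value

# One IO module feeding two controllers gives us a software standby:
pressure_io = IOModule("pressure-1")
primary, standby = Controller("primary"), Controller("standby")
pressure_io.subscribe(primary)
pressure_io.subscribe(standby)
pressure_io.publish(42.5)
print(primary.readings, standby.readings)
```

Because the IO module doesn't care who is listening, swapping or adding a controller never touches the wiring.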

Software

On this side, make sure that you also split the lifecycles of your applications and your models. You want to make sure that you have the opportunity to fine-tune your models to specific assets.

Ever heard that notion of one model for a whole asset class? Well, we don’t believe in it. In most cases, you need to be able to take a model and fine-tune it for a specific asset. You need applications to be able to check in and check out their own models. Or, for efficiency, you should be able to group some of those models.
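Here's a sketch of that per-asset model lifecycle, with a registry API invented for illustration: one base model per asset class, and fine-tuned copies checked in and out per asset:

```python
# Sketch: one base model per asset class, fine-tuned per specific asset.
# The registry API and parameter names are illustrative, not a real product.

class ModelRegistry:
    def __init__(self):
        self._base = {}       # asset class -> base model parameters
        self._per_asset = {}  # asset id -> fine-tuned parameters

    def register_base(self, asset_class: str, params: dict) -> None:
        self._base[asset_class] = dict(params)

    def check_out(self, asset_class: str, asset_id: str) -> dict:
        # A specific asset gets its own fine-tuned copy if one exists;
        # otherwise it starts from the class-level base model.
        return dict(self._per_asset.get(asset_id, self._base[asset_class]))

    def check_in(self, asset_id: str, params: dict) -> None:
        self._per_asset[asset_id] = dict(params)

registry = ModelRegistry()
registry.register_base("rod-pump", {"threshold": 0.5})

# Fine-tune for one noisy well, then check the tuned model back in:
model = registry.check_out("rod-pump", "well-7")
model["threshold"] = 0.8
registry.check_in("well-7", model)

print(registry.check_out("rod-pump", "well-7"))   # the tuned copy
print(registry.check_out("rod-pump", "well-8"))   # still the base model
```

Splitting the model lifecycle from the application lifecycle is what lets well-7 improve without redeploying anything else.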

Maximize value

Now, if you can deploy your machine learning models successfully, you can start capturing that knowledge from the operators. The models need to be fully interactive with the operator’s input, and the operator needs to continue to train them. We present the data in a contextualized way so that operators can understand it and interact with it.

Then you can start capturing all these events and classifying them. You can attach workflows to those events, and start putting in flexible optimization strategies. You could say, for example, due to the market conditions, we need to focus on throughput.

Or, you might say that throughput is not the highest priority. We want to lower costs and extend the lifetime of our assets, as much as we can, lower energy consumption, or lower environmental impact.

You could start marrying workflows and events through machine learning. That way, you can have an agile organization.

To finish off: A real-world example

We have a platform we call the ‘autonomous production advisor.’ It’s built around the lifecycle of machine learning models per asset. It presents data on a contextualized basis to the operators so they can continue to tag models, extend their capabilities, or fine-tune a model for a specific asset.

We provide the capabilities to integrate with our own legacy automation infrastructure or with others’. As you can see here, this is a use case in the oil and gas industry.👇

Slide of a real-life example

Here, we deploy Edge controllers to enhance the capabilities of a pump controller, which runs on an industrial controller. The two devices communicate with each other, and the Edge controller pulls a chart from the pump controller. We then conduct image classification on that chart with machine learning.

The shape of the image tells you exactly what's going on with that pump. So, in the case of oil and gas here, let’s say you have sand interference in production: the shape of that chart will show you that.

We also do clustering analysis on unknown events, and the operators can go in and label those events en masse and continue to train the model over time.
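To give a flavour of that loop — classify known card shapes, set aside the unknowns, let the operator label a whole group at once — here is a toy nearest-centroid sketch. The feature values and the event labels are made up for illustration and are not from the real system:

```python
# Toy sketch of the labeling loop: known card shapes are classified by
# nearest centroid; unknown ones are grouped so an operator can label
# a whole group at once. All features and labels are invented examples.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(card, centroids, max_dist=1.0):
    """Return the nearest known label, or None if the card is too far
    from every known shape (an 'unknown event')."""
    label, d = min(((lbl, distance(card, c)) for lbl, c in centroids.items()),
                   key=lambda t: t[1])
    return label if d <= max_dist else None

# Centroids learned from operator-labeled chart features:
centroids = {"normal": (1.0, 1.0), "sand_interference": (4.0, 0.5)}

cards = [(1.1, 0.9), (4.1, 0.6), (9.0, 9.0), (9.2, 8.8)]
unknowns = [card for card in cards if classify(card, centroids) is None]

# The operator labels the whole unknown group at once, and its centroid
# becomes a new class the model recognizes from now on:
if unknowns:
    n = len(unknowns)
    centroids["gas_lock"] = tuple(sum(v[i] for v in unknowns) / n
                                  for i in range(2))

print(classify((9.1, 8.9), centroids))  # now recognized
```

The production system works on image features rather than two numbers, but the shape of the loop is the same: classify, cluster the leftovers, label en masse, retrain.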

Here you can see the real-world benefit of these technologies out in the world, and how they are able to assist human beings in essential day-to-day jobs. A far cry from becoming our overlords and taking our jobs, eh?