AI applications are extremely demanding, placing heavy stress on hardware like CPUs. IoT devices generate incredible amounts of data, and with a projected 125 billion connected devices by 2030, it’s easy to wonder just how much bandwidth the cloud will need and what levels of latency users can expect. All of that data needs the proper hardware to support it.
With video becoming more and more important in markets like robotics, manufacturing, automation, healthcare, public safety and security, there’s an increasing amount of data being created, captured, and analyzed. All this data needs to go somewhere, and that’s where issues start. And just sending everything through the cloud can lead to higher latency and bandwidth use, which has opened the door for computer vision at the edge.
Edge computing is an extension of the cloud, offering storage, networking, and computing resources - edge servers - to the devices that, in turn, provide services for end devices. According to Gartner, by 2025 edge computing will process 75% of data. That’s not far away anymore, which means edge computing will be practically everywhere before we know it.
In this article, we’ll be diving into edge computing and how it’s solving deployment at the edge, including:
- Edge computing and cloud computing: the basics
- What are the benefits of edge computing?
- What challenges is edge computing facing and how do we solve them?
- Intel’s case studies
- Final thoughts
Edge computing and cloud computing: the basics
Cloud computing is the delivery of computing services - servers, databases, software, storage, analytics, and more - over the internet, typically on a pay-as-you-go plan. As an on-demand service, it helps keep costs down, freeing up budget to scale your business.
Although they offer a number of advantages, these services are centralized in data centers. They can be accessed from any internet-connected device, but the distance between the data centers and end users introduces latency.
Cloud computing is best suited for:
- Smart light systems
- Video camera systems
- Traditional applications
Edge computing brings computing as close as possible to the data, lowering bandwidth use and latency. It moves some of the processing the cloud would otherwise handle on-premises, for example to an edge server.
Edge computing is best suited for:
- Traffic light control
- Autonomous technologies
- Smart devices
Edge AI - running artificial intelligence algorithms at the edge - derives from edge computing and lets you perform heavy computation on edge devices close to the data source, rather than on remote servers in the cloud. This can be extremely useful in public safety and security, where computer vision helps alleviate the load on the network - exactly what’s needed for the high volumes of data that hundreds of cameras can generate.
But AI and computer vision aren’t stand-alone applications; they bring new system requirements, like enhanced cybersecurity, video codecs, pre- and post-processing, and networking and communications (like 5G).
The typical requirements for computer vision are:
- Analytics. Video analytics and AI from 2 to 32+ channels
- Media. 1080p H.264 decode from 8 to 64 channels
- Storage. Enhanced I/O, Optane Memory
- Security. Hardware-based security technologies to help mitigate attack vulnerabilities
- Long lifecycles. Up to 15-year product availability, as a guideline
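To put those channel counts in perspective, here’s a back-of-the-envelope calculation of the upstream bandwidth consumed if every camera streamed to the cloud. The 4 Mbps per-channel bitrate is an assumed figure for 1080p H.264, not from the requirements above - real bitrates vary with scene complexity and encoder settings:

```python
# Aggregate upstream bandwidth if every video channel streams to the cloud.
# 4 Mbps per 1080p H.264 camera is an illustrative assumption.
MBPS_PER_1080P_H264 = 4

def aggregate_mbps(channels, mbps_per_channel=MBPS_PER_1080P_H264):
    """Total upstream bandwidth for `channels` simultaneous camera streams."""
    return channels * mbps_per_channel

for channels in (8, 32, 64):
    print(f"{channels} channels -> {aggregate_mbps(channels)} Mbps upstream")
```

At 64 channels that’s already 256 Mbps of sustained upstream traffic - one reason decode density and local analytics at the edge matter.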
All of these benefit from general-purpose processors with built-in AI acceleration, because running pre- and post-processing or deploying video codecs often requires embedded CPUs. The edge and the cloud exist together: edge AI applications are an extension of the cloud, not a substitute for it.
What are the benefits of edge computing?
1. Higher speed and lower latency
The many different IoT applications running on the same central server can really slow it down. And, with extremely high amounts of data being processed, a server failure takes your devices down with it.
Edge applications process data locally, so devices don’t always have to be connected to the cloud data center. This means new IoT technologies can be developed even in areas with slower response times, or areas without a capable network infrastructure.
And because an edge device can make decisions by itself, real-time performance improves: response speeds go up and data transmission delays shrink. This real-time analysis of data is essential for industries like robotics, healthcare, and avionics.
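The latency benefit is easy to quantify with a rough sketch. Assuming signals travel at roughly 200,000 km/s in fiber (about two-thirds the speed of light), propagation delay alone separates cloud from edge - the distances below are illustrative, and real latency adds routing, queuing, and processing time on top:

```python
# Minimum round-trip propagation delay over fiber, ignoring routing,
# queuing, and processing time. Distances are illustrative assumptions.
FIBER_KM_PER_MS = 200.0  # ~200,000 km/s == 200 km per millisecond

def round_trip_ms(distance_km):
    """Round-trip propagation delay in milliseconds for a given distance."""
    return 2 * distance_km / FIBER_KM_PER_MS

print(f"Cloud data center 1500 km away: {round_trip_ms(1500):.1f} ms minimum")
print(f"Edge server 1 km away:         {round_trip_ms(1):.3f} ms minimum")
```

Even before any processing happens, a distant data center imposes a latency floor that a nearby edge server simply doesn’t have.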
2. More scalability
Efficient data processing and analysis on IoT devices demands new and innovative technology with that capability. When processing video data, for example, hundreds or even thousands of sources connecting at the same time need a scalable service that won’t bottleneck at the edge - something the cloud can’t entirely deliver by itself.
Edge AI can scale to complex analytical needs while keeping latency low. It requires local compute power and an investment in hardware, but it becomes more cost-efficient in the long term.
3. Lower costs
With a growing network of IoT devices and real-time data processing, costs inevitably increase. Streaming heavy volumes of data from CCTV running 24/7, for example, is expensive and cuts deeply into your budget. Fast analysis of real-time data streams in the cloud requires a lot of cloud service capacity, and on-demand cloud services don’t always cut it - at least, not where your money’s concerned.
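A rough sketch of the numbers makes the cost pressure concrete. The 4 Mbps bitrate is an assumed figure for a 1080p H.264 camera; per-GB cloud pricing is deliberately left out, since it varies by provider:

```python
# Rough monthly data volume for always-on CCTV streamed to the cloud.
# The per-camera bitrate is an illustrative assumption.
MBPS = 4                    # assumed 1080p H.264 bitrate per camera
SECONDS_PER_DAY = 24 * 3600

def gb_per_camera_per_day(mbps=MBPS):
    """Data generated by one camera in a day (Mbit -> MB -> GB)."""
    return mbps * SECONDS_PER_DAY / 8 / 1000

def monthly_gb(cameras, days=30):
    """Total data volume for a fleet of cameras over a month."""
    return cameras * gb_per_camera_per_day() * days

print(f"One camera:   {gb_per_camera_per_day():.1f} GB/day")
print(f"100 cameras:  {monthly_gb(100):,.0f} GB/month")
```

At roughly 43 GB per camera per day, a 100-camera site pushes well over 100 TB a month to the cloud - filtering that data at the edge is where the savings come from.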
Edge computing also allows interoperability between modern smart devices and legacy devices that would otherwise be incompatible: it translates the communication protocols legacy devices use into ones modern smart devices can understand. So you can connect your older devices without buying expensive new equipment.
4. Higher cybersecurity
Less data in the cloud means fewer chances of an online attack. With edge computing, this risk is distributed across a variety of devices: the data exchange involves the internet, servers, and nodes, each of which can add its own security measures. And because an edge device can process data without being connected to the central server, the architecture is more private and secure.
By processing data locally rather than sending it to the cloud, you get real-time results while making unauthorized access as difficult as possible. With CCTV, for example, sensitive data can be there and gone in an instant, making it more secure and private. Any issue at one edge can be solved without affecting other parts of the system.
What challenges is edge computing facing and how do we solve them?
While edge AI and computer vision are growing quickly, they present their own challenges when running deep learning or natural language processing on real-world devices. Deploying at the edge demands enough performance at low cost and low power, along with the right algorithms to deliver the real-world accuracy that’s needed.
Smart endpoints, like cameras, connect to an on-premises edge - such as an AI appliance - and then to the cloud, where storage servers and network-attached storage live.
So, the main challenges are:
1. High performance but low costs
Companies like Intel recognize and understand the need businesses are facing for more processing at the edge. With over 1,000 partners deploying all types of technology and a variety of businesses innovating in video analytics deployment, having the right tools is essential.
As businesses across industries adopt AI at the edge for computer vision applications, Intel is seeing a trend toward emerging network video recorder appliances that boost analytics at the edge, such as:
a. An edge converged server, or the all-in-one server.
The edge converged server is an on-premise server - it can be as big as a data centre server - that combines high-performance compute, storage, video management, recording, and analytics in one place.
b. An edge AI box appliance, when you don’t need an upgrade.
The edge AI box appliance can be added to an existing infrastructure if you already have a network video recorder. It takes on as many channels as needed for analytics, making it a low-cost way to add intelligence to your edge network.
So, what does Intel offer for deploying at the edge? With a wide range of tools and ingredients (all of which you can explore further in the talk with Gary Brown), they’ve got:
- Silicon platforms
- Reference designs
- Edge AI accelerator cards
- Connectivity accelerators
- Edge software hub
- DevCloud for the edge
- Intel® CPU with built-in accelerator
- Discrete accelerators
Taking a closer look at a couple of development systems:
11th Gen Intel® Core™, or Tiger Lake, for meeting needs for AI at the edge:
- Built-in acceleration in its architecture
- Double the decode channel density, compared with Intel® Gen 9 media
- Embedded Xe graphics core and GPU
- Runs Intel’s OpenVINO™ toolkit ecosystem
- Minimal amount of equipment upgrade needed
3rd Gen Xeon®, or Ice Lake, for the edge converged use case:
- Hardware-enhanced security
- Scalable and dynamic
- High-performance compute, storage, video management, recording, and analytics on the edge
- Compelling software and hardware AI performance to address deep learning and analytics needs that arise from the vast amounts of data on the edge
- Intel is working on extending its lifecycle and temperature range and hardening it for IoT deployments
Portability can be an issue: mapping to a different processor normally means re-optimizing the application for high efficiency. With Intel’s OpenVINO™ toolkit ecosystem, you port the application once and simply re-optimize when moving to a different processor.
Taking a closer look at the OpenVINO™ software toolkit:
- Powered by oneAPI
- Supports deep learning, computer vision, and hardware acceleration with heterogeneous support
- Helps accelerate solutions over multiple hardware platforms
- Allows for the movement of vision intelligence from edge to cloud
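The “port once, retarget” idea can be sketched as follows. The class and device names here are hypothetical stand-ins for illustration - this is not the actual OpenVINO™ API:

```python
# Hypothetical sketch of the "write once, retarget" pattern a toolkit
# like OpenVINO enables: the application code stays the same and only
# the target device string changes. These names are illustrative
# stand-ins, not the real OpenVINO API.
class CompiledModel:
    def __init__(self, model_path, device):
        self.model_path = model_path
        self.device = device  # device-specific optimization would happen here

    def infer(self, frame):
        # Placeholder for running one inference on `frame`.
        return {"device": self.device, "input": frame}

def compile_model(model_path, device):
    """Compile the same model for whichever processor is available."""
    return CompiledModel(model_path, device)

# The same application code targets a CPU, integrated GPU, or a discrete
# accelerator just by swapping the device name - no per-device rewrite.
for device in ("CPU", "GPU", "ACCELERATOR"):
    model = compile_model("person-detect.xml", device)
    print(model.infer("frame-0")["device"])
```

The design point is that the application is written against one abstraction, and the toolkit handles the per-processor optimization underneath.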
Intel’s case studies
How is Intel helping its partners and businesses as a whole deploy at the edge? Using its development systems and software tools, you can read through three case studies with real-world applications and success:
CoreAVI
CoreAVI delivers cutting-edge safety advances in the aviation, automotive, and autonomous systems industries. In partnership with Intel, using the 11th Gen Intel® Core™ with its embedded GPU, CoreAVI gets a graphics and compute platform for safety-critical cockpit displays, safe autonomous systems, and mission computing.
This offers a range of safety-critical features and display processing not tied to AI alone. The result? Display processing for up to four 8K displays. In addition, computer vision and AI applications can be deployed at the edge in aircraft, for higher performance and more capability in safety-critical applications.
Prolife Foods and meldCX
Intel’s partner meldCX has been working with Prolife Foods to deploy computer vision and retail POS systems. With product loss a huge issue, it’s necessary to accurately detect what products consumers are buying - and do it quickly and efficiently. Deploying such a system with high performance, low cost, and accuracy can be a big challenge in retail.
As such, and powered by the 11th Gen Intel® Core™, meldCX offers a smart scale solution whose software detects the types of food consumers buy and applies computer vision accurately at the checkout counter.
Claro 360
Deploying computer vision in shopping centres, with hundreds of thousands of cameras in need of running analytics, creates demand for high-performance servers. As consumers return, whether cameras have to monitor social distancing or meet other computer vision needs, this is where the 3rd Gen Xeon® comes in: it has enough AI and computer vision performance to run analytics across all video channels.
In partnership with Intel, Claro 360 will develop intelligent video monitoring solutions for better security, monitoring, user identification, and more. Compared to the last generation, Cascade Lake, the 3rd Gen Xeon® delivers a 45-46% performance improvement on average, and a 56% improvement in image classification.
Final thoughts
Edge computing is growing, and soon we’ll see more hardware and software helping deploy computing services across numerous devices. Many companies are already deploying on the edge and seeing its benefits, and while cost and performance can be major factors in how they deploy, emerging technologies are quickly tackling these issues.
Fancy more edge AI info? Why not join us for our Edge Fest, going live 19-21 October 2021. Get your tickets here!