Computer Vision at the Edge is Mainstream Now

Computer Vision (CV) and Edge Analytics have been buzzwords for a while now. However, the combination of the two, which essentially allows organizations to run CV workloads at the Edge, is no longer just a buzzword: it is happening right now, with real, productive use cases. In this article, we'll examine the drivers behind this trend, some architectural considerations, and the challenges leaders should be aware of as they roll out such solutions.

Why is this happening now?

There are several reasons that are driving this adoption:

  1. Video cameras can be the ultimate sensor and are either already present (such as security cameras), or can easily be added to Edge environments as a parallel asset without requiring disruptive instrumentation changes.
  2. CV use cases at the Edge are extremely flexible and span a wide range:
    o Quality inspection in discrete manufacturing
    o Safety-related use cases, such as detecting intruders or objects in restricted spaces
    o Detecting the proximity, count, and flow of people in a given space to help with physical distancing or understanding traffic patterns
  3. A more level deployment playing field at the Edge, through the extension of cloud services or the use of cloud-native technologies such as Kubernetes and containers.

Architectural considerations

Management and scaling remain the biggest challenges, and they need to be factored into an Edge strategy. Edge environments are heterogeneous by nature, and organizations need a plan to tackle this. That plan could include capabilities like:

Modular pipeline: It's critical to build a modular pipeline that gets operationalized at the Edge. This may seem like a heavy lift early on, especially for a single use case, but it pays off as organizations look to scale, maintain, and add features in the future. We generally see the following pattern:

  • Connect and acquire camera feed: This step is about being able to connect to source cameras using protocols such as RTSP. This could involve connecting directly to a camera or via an aggregation point such as VMS (Video Management Server).
  • Pre-process: This step prepares the images for scoring, which could involve tasks such as downsampling, resizing, or cropping. If there are special conditions to check (such as only processing an image when a specific flag is on), that logic can also be inserted in the pre-processing step.
  • Score: This is the core step where a Deep Learning model is used to score a (preprocessed) image. These models can be trained on historical data (typically data from the same cameras) or picked from a library of pre-trained models. There are a few considerations here in terms of which training frameworks are used, which runtimes are required to operationalize the models, etc.
  • Post-process: Once an image has been scored, there is post-processing required to extract metadata (such as number of objects detected, distance between them), or to create visual annotation to help downstream consumption. The post-processing step also allows for handling of any specific rules around triggering an alert based on an object being in or out of a specific geographical area.
  • Output to desired destinations: For this type of solution to deliver value, it must be integrated into a business process. That means writing pipeline output to destinations such as an existing alert dashboard, email or text notifications, or persistent storage for annotated images.
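To make the modularity concrete, here is a minimal sketch of the pattern above in Python. The step names (`preprocess`, `score`, `postprocess`) and the `Frame`/`Pipeline` classes are illustrative placeholders, not a real product API; the scoring step returns a hard-coded detection where a Deep Learning model would run.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class Frame:
    """A single camera frame plus metadata accumulated by each step."""
    image: Any
    metadata: Dict[str, Any] = field(default_factory=dict)

class Pipeline:
    """Chains independent steps so each can be swapped or scaled on its own."""
    def __init__(self) -> None:
        self.steps: List[Callable[[Frame], Frame]] = []

    def add_step(self, step: Callable[[Frame], Frame]) -> "Pipeline":
        self.steps.append(step)
        return self

    def run(self, frame: Frame) -> Frame:
        for step in self.steps:
            frame = step(frame)
        return frame

# Hypothetical steps; a real pipeline would resize images, call a model, etc.
def preprocess(frame: Frame) -> Frame:
    frame.metadata["resized"] = True          # stand-in for resize/crop logic
    return frame

def score(frame: Frame) -> Frame:
    # Stand-in for Deep Learning inference on the preprocessed image.
    frame.metadata["detections"] = [{"label": "person", "confidence": 0.92}]
    return frame

def postprocess(frame: Frame) -> Frame:
    # Example rule: raise an alert on any high-confidence detection.
    dets = frame.metadata.get("detections", [])
    frame.metadata["alert"] = any(d["confidence"] > 0.9 for d in dets)
    return frame

pipeline = Pipeline().add_step(preprocess).add_step(score).add_step(postprocess)
result = pipeline.run(Frame(image=None))
```

Because each step only consumes and produces a `Frame`, steps can be replaced independently, for example swapping the scoring model without touching the connect or output logic.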

Edge to Cloud: While it is tempting to jump to Edge-only workloads, there is value in designing use cases that utilize both Edge and Cloud. This allows local processing and alerting to happen at the Edge while a global, fleet-level view is created in the Cloud. This does not mean that Edge workloads cannot operate disconnected from the Cloud; they absolutely can and must, since that's a core tenet. However, this pattern calls for the Edge to connect to the Cloud for writing alerts to facilitate a global, fleet-level dashboard, obtaining config changes, etc.
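The disconnected-operation requirement can be illustrated with a small buffering sketch: alerts are queued locally at the Edge and flushed to the Cloud only when connectivity is available. The `AlertSync` class and `publish` callable are hypothetical names for illustration; a real deployment would publish over a protocol such as MQTT or HTTPS.

```python
import json
import queue

class AlertSync:
    """Buffers alerts locally so the Edge keeps working offline,
    then drains the backlog to the Cloud when connectivity returns."""

    def __init__(self, publish):
        self.publish = publish        # callable that sends one serialized alert
        self.buffer = queue.Queue()   # local store-and-forward queue

    def record(self, alert: dict) -> None:
        """Called by the Edge pipeline; never blocks on the network."""
        self.buffer.put(alert)

    def flush(self, connected: bool) -> int:
        """Send buffered alerts while connected; return how many were sent."""
        sent = 0
        while connected and not self.buffer.empty():
            alert = self.buffer.get()
            try:
                self.publish(json.dumps(alert))
                sent += 1
            except OSError:
                self.buffer.put(alert)  # re-queue and retry on next flush
                break
        return sent

sent_log = []
sync = AlertSync(publish=sent_log.append)
sync.record({"camera": "dock-3", "event": "intrusion"})
sync.record({"camera": "dock-7", "event": "proximity"})
offline = sync.flush(connected=False)  # nothing is sent while disconnected
online = sync.flush(connected=True)    # backlog drains once reconnected
```

The same store-and-forward idea applies in the other direction for pulling config changes down from the central platform.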

Management platform: Investing in a platform that allows central management of Edge infrastructure and workloads is essential. It is a key component in realizing the previously mentioned "Edge to Cloud" pattern. This is becoming easier as hyperscale Cloud providers now offer comprehensive capabilities in this area.

HW resource optimization: Deep Learning models are generally parameter-heavy and usually require hardware acceleration to achieve acceptable performance. This is especially true for certain classes of models, such as classification. The good news is that there are excellent, proven HW resources, such as GPUs, to support this acceleration. GPUs, however, are relatively expensive, and IT leaders will want to optimize their consumption. This is an important consideration when designing a CV pipeline; one benefit of a modular pipeline is that one can choose to accelerate only certain steps (such as scoring). There are also design patterns that allow sharing of the same GPU across multiple cameras, which may mean only a single GPU deployment is needed in an Edge environment.
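One common way to share a single GPU across cameras is to batch frames from all of them into one inference call. The sketch below is a simplified illustration of that idea; `batch_score` and `dummy_model` are hypothetical names, and the dummy model (which just counts bright pixels per frame) stands in for a real batched GPU inference call.

```python
def batch_score(frames_by_camera: dict, model) -> dict:
    """Group the latest frame from each camera into one batch so a single
    GPU-resident model serves all cameras with one forward pass."""
    cameras = list(frames_by_camera)
    batch = [frames_by_camera[cam] for cam in cameras]
    scores = model(batch)                # one inference call for all cameras
    return dict(zip(cameras, scores))    # map results back per camera

def dummy_model(batch):
    # Stand-in for batched GPU inference: returns a detection count per frame.
    return [sum(1 for px in frame if px > 128) for frame in batch]

results = batch_score({"cam-1": [200, 50, 190], "cam-2": [10, 20]}, dummy_model)
# results == {"cam-1": 2, "cam-2": 0}
```

Batching amortizes the fixed per-call overhead of the accelerator, which is why one GPU can often serve many camera feeds as long as the combined frame rate stays within its throughput.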

Challenges to keep in mind to achieve scale

CV use cases at the Edge have the potential to rapidly gain prominence in 2022 and beyond, with better support for Edge-to-Cloud services from Cloud providers and an increasingly level deployment playing field via Kubernetes and containers. However, there are a few challenges that leaders must be prepared to handle to drive scale.

  • Change management: While the goal for most CV use cases at the Edge may be a true closed-loop system, it may be wise not to jump to that early in the game. For example, it can be useful to create a roadmap of use cases where the Edge system in the first phase only detects alerts and sends notifications, and a human takes corrective action.
  • Technology choices: As discussed earlier, a variety of technologies are involved here (GPUs, Deep Learning frameworks, management platforms, connectivity, etc.). This requires thoughtful consideration to make sure technology choices are compatible and allow for scaling as organizations decide to add more use cases or sites. These choices carry direct cost implications, both one-time (such as GPUs) and ongoing (such as connectivity), that organizations need to manage and budget for.
  • Governance: Deploying any kind of workload outside the central infrastructure (such as a Data center or Cloud) raises immediate governance concerns. To address this, it's important to have an Edge-to-Cloud strategy where the workloads at the Edge are managed through the central Cloud platform. The central platform is also responsible for managing scope, providing updates, and staying version-aware for each of the Edge sites. With CV use cases, there is also a privacy challenge that needs to be managed. The governance capabilities should support immediate shutoffs, an audit trail of processed images, and other similar capabilities to provide a central control layer that helps address privacy issues.


CV use cases at the Edge are poised to take off. This is especially true in verticals like Manufacturing and Energy, given the presence of enterprise edge compute. There are several factors, such as governance and scaling, that need to be taken into consideration, but the value demonstrated by CV use cases at the Edge is clear. These use cases could ultimately power automation and localized feedback loops that offer significant safety and productivity gains.

About SAS in IoT
SAS empowers organizations to create and sustain business value from diverse IoT data and initiatives, whether that data is at the edge, in the cloud, or anywhere in between. Our robust, scalable, and open edge-to-cloud analytics platform delivers deep expertise in advanced analytics – including AI, machine learning, deep learning, and streaming analytics – to help customers reduce risk and boost business performance. Learn more about our industry and technology solutions at

By: Saurabh Mishra
Sr. Manager, IoT Product Management, SAS