What goes wrong most often when deploying video analytics to reduce false alerts and improve system accuracy?
Frank Crouwel, Managing Director of security technology integrator NW Security Group, examines four focus areas where his business sees video analytics working poorly because of errors made in design, specification, installation and/or configuration.
In a market study we carried out during May 2021, 93 percent of all medium and large-sized firms in England (all of which were running video monitoring systems supported by video analytics software) reported that their video security systems were generating excessive numbers of false alarms. Furthermore, just 25 percent of video security system decision-makers believe that the latest generation of video analytics and intelligent systems is capable of improving accuracy and reducing false alerts.
These two statistics from our recent survey must make worrying reading for video analytics software vendors, especially when you consider that one of the key reasons for augmenting video security systems with analytics software is to reduce false alarms and improve system accuracy – making it possible for security teams to respond more effectively, in real time, to security incidents.
It’s really only in the last few years that video analytics has improved to the point where it is capable of consistently adding value to video systems. Before then, the only widely used video analytics tools were, many argued, video motion detection (VMD) and so-called tripwire analytics, which are often used for perimeter protection and for people counting in footfall analysis in the retail sector, for example.
Configuring video analytics used to be much more of an art than a science. Analytics tools demanded considerable resources to configure before they produced acceptable results, and a tool that worked well in one setting or application tended not to work effectively in another. The result was a great deal of misapplication, and tools that were often never adopted.
However, the market has matured considerably over the last few years. Some of this is linked to Moore’s Law, which finally made it possible for a wider range of increasingly sophisticated video analytics software to run directly inside network cameras. For those not familiar with Moore’s Law, it’s the observation that the number of transistors in a dense integrated circuit doubles about every two years. This makes it possible to do more and more processing on smaller and smaller devices. It also explains why the typical smartphone today has hundreds of times more compute power than a PC had just a few years ago.
Until the last few years, most video analytics had to run on high-specification dedicated servers. This rendered it too expensive and cumbersome for most businesses to deploy real-time analytics to help identify security threats or even spot behavioural anomalies which might be early indicators of a crime or misdemeanour. So video analytics tended to be applied in relatively controlled environments like airports, rail stations, and major road intersections, where large budgets were available for addressing safety, security, or high-value operational requirements.
Fast forward to 2021, and a wide range of video analytics has become affordable and much more widely available. Of the 103 businesses with more than 50 employees running a video security or CCTV system with video analytics in some shape or form, contacted as part of our recent England-wide market study, we found:
- 60% are now using Facial Recognition analytics (mostly on commercial premises to support access control systems)
- 52% are using Event or Behavioural Recognition Analytics
- 50% are using Video Motion Detection
- 50% claim to be using Object Tracking from camera to camera
- 50% use ANPR / LPR
- 48% use Object Detection & Classification
- 47% use Directional Detection
- 46% use Optical Character Recognition (OCR) analytics
However, despite these impressive market penetration numbers, it’s also clear that there is some disquiet out there about the accuracy of the video analytics modules system owners are using. Our research reveals that a third (33 per cent) of system owners found the language vendors use in sales and marketing literature confusing. And more than a quarter (28 per cent) went further, declaring vendors’ literature ‘misleading’ and containing ‘too much over-promising’.
The result: only a quarter of users felt that the latest generation of video analytics solutions are capable of improving accuracy and reducing false alerts. We decided to explore some deeper reasons why the latest generation of video analytics is not seen as delivering security and operational benefits consistently to a higher percentage of system decision-makers today.
There is no doubt that part of the problem is that too many analytics tools are simply switched on as part of a tick-box exercise during camera setup, with insufficient technical know-how or configuration work applied to optimise them.
However, deeper problems reveal themselves when you go into the field to inspect established systems and explore why they are often not performing well. Our study found that 39% of firms with video analytics experienced false alerts due to the location or positioning of cameras, 29% due to poor lighting of cameras’ fields of view, and 27% due to incorrectly specified or configured video analytics software.
Many of the problems highlighted in our study are due to system errors or oversights in design, installation and/or configuration. Here are just four areas, of many, where it can go wrong:
1. Beware Built-in Camera Lighting
In our study, 4 out of 10 (41%) video system decision-makers reported obstructions on their CCTV cameras, such as dirt or insects, causing false alerts – the single most widespread cause of false alarms. Many of these cases are likely down to built-in camera lighting, which draws in insects that obstruct the view and trigger video analytics alerts.
Day/night cameras today often have built-in infrared (IR) lighting, which can be a problem at night because the heat from the LEDs in these cameras attracts insects which can then obscure the field of view and set off false alarms. It’s far better to use IR lighting separate from the cameras, so it provides the right level of lighting for the area you need to cover and fewer insects are drawn close to the camera.
2. ANPR installs demand focus on angles, lighting, speed of vehicle
One of the first video analytics applications we saw installed right across the country was Automatic Number Plate Recognition (ANPR), also known as License Plate Recognition (LPR). ANPR was demanded in large numbers at secure warehouses, depots, schools and offices, generally to ensure that only pre-identified, pre-registered vehicles could get into closed sites where high-value equipment was stored or loading bays were located.
However, we often found ourselves having to put companies’ ANPR installs right following incorrect installation or configuration. As with many video analytics, a camera running ANPR software needs to be installed correctly: at the right height, at the right angle, and at the right distance from the vehicle. The install must also take into account the speed of the vehicle at the point where the plate is captured, environmental conditions such as rain or snow, and the camera’s own imaging performance, so that it can deal with bright headlights and reflections from nearby objects.
For example, we have found problems with plate-reading accuracy where rain puddles build up between the ANPR camera and the vehicle whose number plate the camera needs to read. If the camera is not positioned, specified and lit correctly, your ANPR camera might pick up a distorted reflection of the number plate in that puddle instead of the direct view of the plate on the front of the vehicle – raising an alarm because the number plate logged by the system does not match any entry on the list of pre-authorised vehicles.
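Getting the height, angle, distance and shutter speed right is largely a geometry exercise. As a rough illustration, a simple pinhole-camera sketch can estimate whether a proposed mounting position gives enough pixel density across the plate, and how much motion blur a moving vehicle will smear into the image. Note that the sensor size, resolution, speeds and thresholds below are invented figures for a hypothetical install, not values from any specific product:

```python
def pixels_per_metre(h_resolution_px, sensor_width_m, focal_length_m, distance_m):
    """Horizontal pixel density at the target distance (pinhole camera model)."""
    fov_width_m = distance_m * sensor_width_m / focal_length_m
    return h_resolution_px / fov_width_m

def motion_blur_px(speed_kmh, exposure_s, ppm):
    """Pixels the plate travels during one exposure (worst-case, crossing the lane)."""
    return (speed_kmh / 3.6) * exposure_s * ppm

# Hypothetical install: 1080p camera, 6.4 mm sensor, 12 mm lens, plate read at 10 m
ppm = pixels_per_metre(1920, 0.0064, 0.012, 10)   # -> 360.0 px/m at the plate
blur = motion_blur_px(30, 1 / 1000, ppm)          # 30 km/h at 1/1000 s -> 3.0 px

# ANPR vendors typically quote a minimum pixel density and a blur budget of a
# few pixels; treat any such thresholds as assumptions to verify against the
# datasheet of the product you actually choose.
print(f"{ppm:.0f} px/m, {blur:.1f} px of motion blur")
```

A sketch like this makes it obvious why a camera moved a few metres further back, or a vehicle entering faster than expected, can quietly drop the system below its working accuracy.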
3. Don’t expect too much from one camera
One common expectation we see amongst end-users is that a camera installed for general-purpose security observation of a wide area can also perform a specific function through video analytics. This is often not the case.
More often than not, video analytics in a camera will only work well if the camera is set up for the specific purpose of the analytic function. For example, going back to the ANPR setup, a camera overlooking a car park will not accurately read the number plates of every vehicle entering or exiting.
A camera with the ANPR application will need to be installed at the car park entrance/exit and focused specifically on the in and out lanes, exclusively for the purpose of reading number plates. Therefore, at least two cameras will be needed to achieve the required outcomes and accuracy in this instance. This simple example equally applies to the vast majority of other video analytics.
4. Inaccurate Object Detection, Object Classification & Facial Recognition
The US-based industry analyst and renowned tester of surveillance equipment, IPVM, has produced a very detailed report on analytics precision. Its definition of precision is instructive:
“Precision represents how well the algorithm finds objects correctly but does not take into account objects that it misses (False Negatives), or correctly ignores (True Negatives). Facial recognition access control systems require high precision, not granting access to the wrong person, however, missed recognitions (False Negatives) are frustrating and significantly impact users.”
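IPVM’s distinction can be made concrete with the standard definitions: precision measures how much of what the algorithm flagged was correct, while recall measures how much of what it should have flagged it actually found. A minimal sketch (the counts below are invented for illustration) shows why a system can look excellent on precision while still frustrating users with missed recognitions:

```python
def precision(true_positives, false_positives):
    """Of everything the algorithm flagged, what fraction was correct?"""
    return true_positives / (true_positives + false_positives)

def recall(true_positives, false_negatives):
    """Of everything it should have flagged, what fraction did it find?"""
    return true_positives / (true_positives + false_negatives)

# Invented counts for a facial-recognition access control system:
# 95 correct grants, 1 wrong-person grant, 30 missed recognitions.
p = precision(95, 1)    # ~0.99 -- looks excellent on a datasheet
r = recall(95, 30)      # 0.76  -- yet nearly a quarter of users are turned away
print(f"precision={p:.2f}, recall={r:.2f}")
```

When evaluating vendor claims, it is worth asking which of these two numbers a quoted “accuracy” figure actually refers to.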
Be careful when selecting object detection analytics. You only have to read some video analytics reviews to see how common incorrect categorisation of objects is. Facial recognition analytics, where faces are matched against ‘watch lists’ in public spaces or lists of authorised employees inside company buildings, remains inaccurate in many cases. Very careful design and configuration work, ideally set up in highly controlled environments such as airports, must be completed prior to installation to ensure accuracy reaches acceptable levels.
Heuristic analytics’ primary weakness is that they are limited to detecting features or variables that are hard-coded by humans. Heuristic analytics are prone to misclassifications when objects do not meet the pre-set expectations. For example, a person crawling on the ground wearing evenly-coloured clothing might be classified as a vehicle or animal instead of a person, given their anomalous aspect ratio and uniform colouring.
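To see how a hard-coded rule fails, consider a toy heuristic classifier, in the spirit of the crawling-person example above. The thresholds are invented for illustration and are not taken from any real product; they label a detected bounding box purely by its shape:

```python
def classify_by_aspect_ratio(width, height):
    """Toy heuristic: label a detection by its bounding-box shape alone."""
    ratio = height / width
    if ratio > 1.5:
        return "person"    # upright people are taller than they are wide
    if ratio < 0.8:
        return "vehicle"   # vehicles are wider than they are tall
    return "animal"        # everything roughly square falls through here

# A standing person fits the hard-coded expectation...
print(classify_by_aspect_ratio(0.5, 1.7))   # -> person
# ...but the same person crawling presents a wide, low box, so the
# rule misclassifies them as a vehicle, exactly as described above.
print(classify_by_aspect_ratio(1.7, 0.5))   # -> vehicle
```

Deep-learning classifiers avoid this particular trap by learning features from examples rather than from hand-written rules, though, as noted later, they bring their own compute-cost trade-offs.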
With object detection and classification, it should be quite clear where the limitations of a chosen video analytics product lie (provided the vendor is upfront about this). It is worth asking for detailed information about this prior to purchasing.
Our study shows that there is a mismatch between end users’ expectations and their real-life experiences with video analytics. We don’t put this down to video analytics products not working but more to design, installation, and configuration of video analytics that needs to be improved in the field.
Many end-users in our study reported finding video analytics vendors’ marketing literature confusing. Others stated categorically that video analytics vendors were over-promising in their sales literature. This sort of miscommunication does not help end-users to begin their adoption of video analytics with the right expectations.
It’s certainly not helped by the elastic definition of an analytics algorithm’s precision, and by the range of different algorithms in use today – some of which work well for one application but poorly for another.
Precision remains a key problem for video analytics usage today. There are also trade-offs to be made between different types of analytics. For example, strongly performing deep learning algorithms may be more accurate than heuristics for many applications, but they demand more compute power and may therefore be too expensive to use in certain instances.
One key rule of thumb for video analytics applications and indeed all your CCTV system installations is to properly consider the purpose and operational requirements of any new monitoring or surveillance that you are planning to enable.
Far too often, installs are driven by coverage. In other words, this camera is capable of covering that field of view. It may technically be able to ‘cover’ all the gates at the entrance of a stadium, for example, but can a behavioural video analytic deployed on that camera accurately spot dangerous crowd density levels building up or help spot the epicentre of a fight breaking out?
Video systems and their accompanying video analytics must be designed and configured with that safety or security goal in mind first. Without that hard focus on the operational requirements, there is always a risk that system owners and users will be lumbered with a system which is putting out too many false alarms and failing to do its job when incidents happen.
Suffice to say, selecting the right video analytics software, marrying it with the right hardware, and installing and configuring it correctly to generate optimum results and eliminate false alerts is complex enough that it is worth calling in an expert to help you work through the minefield of decisions which need to be taken to make sure the video analytics solution you need is delivered optimally.