Controlling an overloaded video processing graph

Previous tutorial: Image statistics


The video processing graphs available in the sandbox wizard allow adding any number of steps to perform different sorts of manipulations on the incoming video. Those steps can be represented both by different image/video processing plug-ins combined sequentially and by scripting plug-ins, which allow defining more complex and dynamic video manipulations. But what if the total image processing done on the video frames takes more time than the frame interval? For example, a camera may provide 30 frames a second, which is a frame every ~0.033 seconds (33 ms). If handling/processing of these frames takes more time than that, the video source may get blocked and will not be able to provide video frames at the pace it would like to.

How bad is a blocked video source and what can be the result of it? Well, it really depends on the video source and its implementation. In the simplest case video frames will get a bit delayed (depending on how heavy the performed image processing is, etc.) and nothing more - the video source will keep acquiring new video frames once control gets back to it. However, another video source may do something more advanced and start queueing video frames if the client application is not fast enough to consume them at the required rate. In this case the frame delays may become significant and clearly noticeable (up to a couple of seconds, which is no good for any real-time video processing). Moreover, it may affect not only the Computer Vision Sandbox application, but the performance of the entire system as well. Personally I had a chance to work with some USB cameras with poor drivers, which could almost freeze the system while queueing video frames. We certainly don't want that, so it is better to avoid delaying the video source for too long.

The description above does not go deep into the details of what actually happens in the Computer Vision Sandbox, but it is enough to get the main idea - blocking the video source is not desirable in most cases. So our goal here is to find a way to detect if the video source gets blocked. The sandbox wizard introduction tutorial already demonstrated a tool which may help in troubleshooting performance issues of video processing graphs. The tool shows the average time taken by each step of a graph and the average time taken by the entire graph. The graph time may already serve as a good indicator of potential issues - if it is greater than the frame interval of the video source it is applied to, then the video source may get blocked and new video frames arriving from it will get delayed before they are processed.
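The comparison between graph time and frame interval can be sketched as a tiny helper. This is just an illustration of the arithmetic involved; the function name and units are made up for the example:

```python
def source_may_block(fps, graph_time_ms):
    """True if the average graph time exceeds the frame interval,
    i.e. the video source may get blocked waiting for processing."""
    frame_interval_ms = 1000.0 / fps
    return graph_time_ms > frame_interval_ms

# A 30 fps camera provides a frame every ~33 ms:
source_may_block(30, 40)   # True  - 40 ms of processing cannot keep up
source_may_block(30, 25)   # False - 25 ms leaves some headroom
```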

Although the graph time value gives an idea of the video processing performance, it still does not tell explicitly whether new video frames get delayed before they get into the video processing graph. The reason for this is that there is actually more happening in the background. To get a rough picture, here are a bit more of the details. Each video source runs in its own background thread - the thread which interacts with a particular camera, using particular APIs, etc. When a new frame is available, it notifies the client application about it using callback functions, interfaces or whatever notification API is in use. The Computer Vision Sandbox application, of course, does not want to handle the new frame in the thread of the video source. Instead it just copies the image data from the memory buffer of the video source, sets a signal that a new video frame is ready to be processed and returns back to the video source, letting it continue with acquisition of the next video frame. The raised signal is then detected by a video processing thread (one per video source), which starts its job - applying the configured video processing graph to the arrived image. When all processing is done, it notifies the UI part, which makes a copy of the processed frame and then performs rendering in the application's UI thread. The user interface thread is not something we care much about here, but it too may introduce a tiny delay, since copying image data into data structures shared by multiple threads must be synchronized.
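The handoff between the video source thread and the video processing thread can be modelled with two signals: one saying "a new frame is ready" and one saying "the processing thread is free". Below is a minimal sketch of that pattern using Python's threading events; all class and method names are hypothetical, and the real Computer Vision Sandbox implementation is in C++ and differs in detail:

```python
import threading

class FrameHandoff:
    """Sketch of the described handoff: the video source thread copies the
    frame, signals the processing thread, and the processing thread marks
    itself free again once the graph has been applied."""

    def __init__(self, process):
        self.process = process                   # stands in for the video processing graph
        self.frame_ready = threading.Event()     # raised: a new frame awaits processing
        self.thread_free = threading.Event()     # raised: processing thread is idle
        self.thread_free.set()
        self.shared_frame = None
        self.results = []
        self.running = True
        self.worker = threading.Thread(target=self._processing_loop)
        self.worker.start()

    def on_new_frame(self, frame):
        """Called from the video source thread: wait until the processing
        thread is free, hand over the frame, raise the ready signal."""
        self.thread_free.wait()                  # blocks the source if processing is slow
        self.thread_free.clear()
        self.shared_frame = frame                # stands in for copying the image buffer
        self.frame_ready.set()

    def _processing_loop(self):
        while True:
            self.frame_ready.wait()
            self.frame_ready.clear()
            if not self.running:
                break
            self.results.append(self.process(self.shared_frame))
            self.thread_free.set()               # ready for the next frame

    def stop(self):
        self.thread_free.wait()                  # let the last frame finish
        self.running = False
        self.frame_ready.set()
        self.worker.join()

# Feed a few "frames" through a trivial processing step:
handoff = FrameHandoff(lambda f: f * 2)
for i in range(5):
    handoff.on_new_frame(i)
handoff.stop()
# handoff.results == [0, 2, 4, 6, 8]
```

Note that `on_new_frame()` blocks when the processing thread is still busy, which is exactly the "blocked video source" situation discussed above.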

Now the above gets closer to the real picture. It shows that the video processing graph, for which we have timing, is not the only thing done on the way of video frames from their sources to the display. So even if the time taken by the video processing graph is slightly less than the frame interval of the video source, it does not guarantee that nothing gets blocked/delayed. And so Computer Vision Sandbox version 1.2.1 introduces an extra indicator, which tells for sure whether arriving video frames get delayed before getting into the video processing graph. As mentioned above, the new frame notification happens in a video source thread. To make a copy of the incoming image and pass it to the video processing thread, we need to be sure that the video processing thread is free and ready to process the next frame. For this purpose another signal is used to indicate the processing thread's status. If the signal is raised, we can feed a new image into the video processing thread. If it is not raised, then we need to wait for the thread to get free. And this is how delayed frames happen.
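The delayed-frame detection amounts to checking the "processing thread is free" signal at the moment a frame arrives: if it is not raised, the frame counts as delayed and the source thread has to wait. A minimal sketch of that check (hypothetical names, using a Python event in place of the real signal):

```python
import threading

class DelayCounter:
    """Sketch of the 1.2.1 indicator: before handing a frame to the
    processing thread, check whether its 'free' signal is raised; if not,
    the frame is counted as delayed and the source thread waits."""

    def __init__(self):
        self.thread_free = threading.Event()
        self.thread_free.set()
        self.delayed_frames = 0

    def hand_over_frame(self):
        if not self.thread_free.is_set():    # processing thread still busy
            self.delayed_frames += 1         # the number the tool reports
        self.thread_free.wait()              # frame is delayed until the thread is free
        self.thread_free.clear()             # thread becomes busy with this frame

# First frame finds the thread free; the second arrives while it is busy:
counter = DelayCounter()
counter.hand_over_frame()                            # not delayed
threading.Timer(0.05, counter.thread_free.set).start()   # simulate processing finishing
counter.hand_over_frame()                            # delayed until the signal is raised
# counter.delayed_frames == 1
```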

To report the delayed frames, version 1.2.1 extends the video processing information tool and shows the number of delayed frames - the frames which, at the time of their arrival, found the video processing thread too busy to accept them. If no frames are delayed at all, the performance issue message is not shown. However, if it is shown and the number of delayed frames keeps increasing, then we have a video processing performance issue to investigate.

What should be done if there are delayed frames? The solution may differ depending on the goal of the performed video processing and the overall performance. In some cases the entire video processing takes only a few milliseconds longer than the frame interval - just enough to detect that the processing thread is still busy. But at the same time the video frame rate does not drop and the system is capable of performing the required video processing. In this case the delayed frames issue may be ignored entirely. However, in many other cases the delay may become noticeable and potentially lead to system performance issues. In that case something may need to be done. One option is to configure the video source to provide a lower frame rate. Another option is simply to reduce the amount of video processing performed. In some cases, though, neither of these is achievable (all processing needs to be done and the video source may not provide an option to control frame rate). In this case a potential solution might be to simply drop the incoming frames if the video processing thread is still busy. So if the application gets a new frame while the previous frame is still being processed, it will throw the frame away and continue acquiring new frames, hoping that the processing thread will become free when the next frame becomes available. This option is available for configuration on the settings page of the sandbox properties.
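The frame-dropping behaviour changes the check from "wait until free" to "reject if busy". Here is a small sketch of that policy, again with made-up names and a Python event standing in for the processing thread's status signal:

```python
import threading

class DropPolicy:
    """Sketch of the sandbox's 'drop frames' option: when a new frame
    arrives while the processing thread is busy, the frame is thrown
    away instead of delaying the video source."""

    def __init__(self):
        self.thread_free = threading.Event()
        self.thread_free.set()
        self.dropped_frames = 0              # reported instead of delayed frames

    def offer_frame(self, frame):
        if not self.thread_free.is_set():
            self.dropped_frames += 1         # processing busy: drop, don't wait
            return False                     # source continues immediately
        self.thread_free.clear()             # thread now busy with this frame
        # ... copy the image and signal the processing thread here ...
        return True

# The source is never blocked, but frames arriving too early are lost:
policy = DropPolicy()
policy.offer_frame("frame 1")    # accepted, thread becomes busy
policy.offer_frame("frame 2")    # dropped: processing has not finished yet
policy.thread_free.set()         # processing of frame 1 completed
policy.offer_frame("frame 3")    # accepted again
# policy.dropped_frames == 1
```

The trade-off described below follows directly from this: the source thread returns immediately either way, so it is never blocked, but the effective processed frame rate goes down.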

Once the sandbox is configured to drop frames when video processing takes too much time, the message indicating a performance issue changes to reflect the fact that the application is now dropping video frames instead of delaying them. The reported frame rate will also drop, since the application no longer tries to process everything received from the video source.


Next tutorial: Video repeaters