Sandbox wizard introduction - image processing

Previous tutorial: Multiple cameras' views and their rotation


It was already shown before how to add different cameras into the Computer Vision Sandbox and then combine them into views using the notion of sandboxes. However, making camera views is not the main purpose of sandboxes - it is just one of the things they provide. The more interesting part is doing something with the video coming out of the cameras - performing some image enhancement/processing, detecting something in it, doing some computer vision, or at least saving it. This is the task for other plug-ins to perform. Remember, the idea is to provide a modular solution based on plug-ins - the more plug-ins are added to the system, the more features it can provide. So far it was all about video source plug-ins - those which provide video from one source or another. This time, however, let's have a look at another plug-in type - image processing plug-ins.

To apply image processing to the video coming from cameras, the Sandbox Wizard must be used; it is accessible from the project tree's context menu. For every camera added to a sandbox, the wizard allows specifying a video processing graph - a sequence of steps to perform for every video frame coming from the camera. Those steps are represented by plug-ins of different types, which can be selected from the list of available plug-ins shown in the wizard. For example, the picture below shows a video processing graph with 3 steps configured for one of the cameras of a sandbox:
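Conceptually, a video processing graph is just an ordered list of steps, where each step receives the output of the previous one for every incoming frame. Below is a minimal Python sketch of that idea; the step functions and the frame representation are made up for illustration and are not the actual plug-in API.

```python
# Illustrative only: frames are modeled as flat lists of 8-bit pixel values,
# and each "plug-in" is a function taking a frame and returning a new one.

def brightness_step(frame):
    # Hypothetical step: raise every pixel value, clamped at 255
    return [min(p + 30, 255) for p in frame]

def invert_step(frame):
    # Hypothetical step: invert every pixel value
    return [255 - p for p in frame]

def run_graph(frame, steps):
    # Apply each configured step in order to the incoming frame
    for step in steps:
        frame = step(frame)
    return frame

graph = [brightness_step, invert_step]
print(run_graph([0, 100, 250], graph))  # [225, 125, 0]
```

In the real application the wizard builds such a chain from the configured plug-ins and runs it for every frame delivered by the camera.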

A number of image processing plug-ins are provided with the Computer Vision Sandbox (and the collection is expected to grow), so combining them into a video processing graph may produce different effects - one may enhance video by increasing its brightness, contrast, etc., add some non-photorealistic effects, or process images for further detection/recognition tasks - it all depends on the set of plug-ins available at hand, their sequence and their configuration.

Every plug-in available in the system comes with a description, which can be shown by selecting the corresponding menu item from the plug-in's context menu (available not only from the Sandbox Wizard, but from anywhere else a list of plug-ins is shown). A plug-in's description usually provides information about what the plug-in does, which properties are available, their possible values, etc. One very useful piece of information provided for image processing plug-ins is the list of acceptable input pixel formats and the list of resulting output pixel formats. For example, some image processing plug-ins operate only on color images, while others operate only on grayscale images; some accept images of one format and produce images of a different format, while others do not change the pixel format as a result of the processing. This information becomes very important when connecting image processing plug-ins into a sequence - if one plug-in produces a pixel format which is not supported by the following plug-in, then the video processing graph will not run. If this happens, the video player will show the result of the last successful image processing plug-in and a message about the pixel format mismatch. So be careful with it - read the documentation.
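The compatibility rule described above can be sketched as a simple validation pass over the chain: each hypothetical plug-in declares which input formats it accepts and which format it produces, and the graph is valid only if every step's output format is accepted by the next step. The class, format names and plug-in names below are assumptions made for illustration, not the application's real API.

```python
class Step:
    """Illustrative stand-in for an image processing plug-in."""
    def __init__(self, name, accepts, produces):
        self.name = name
        self.accepts = set(accepts)   # acceptable input pixel formats
        self.produces = produces      # resulting output pixel format

def validate_graph(input_format, steps):
    # Walk the chain, tracking the pixel format flowing between steps
    fmt = input_format
    for step in steps:
        if fmt not in step.accepts:
            return False, f"'{step.name}' does not accept {fmt}"
        fmt = step.produces
    return True, fmt

grayscale = Step("Grayscale", ["RGB24"], "Gray8")
threshold = Step("Threshold", ["Gray8"], "Gray8")
sepia     = Step("Sepia", ["RGB24"], "RGB24")

print(validate_graph("RGB24", [grayscale, threshold]))
# (True, 'Gray8') - each step accepts what the previous one produces
print(validate_graph("RGB24", [grayscale, sepia]))
# (False, "'Sepia' does not accept Gray8") - the graph would not run
```

This is exactly the kind of mismatch the video player reports when a graph stops at a pixel format incompatibility.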

You may wonder how to check the pixel format of the images coming out of a camera. The documentation lists supported pixel formats for image processing plug-ins, but not for video source plug-ins (because the capabilities of a remote camera are not always known in advance). Getting this information is simple - run a camera, right-click it to get its context menu and then check the camera's info.

Once a video processing graph is done and working, it might be interesting to check its performance. This can be done using the Video Processing Information dialog, which provides the average timing for each step in the graph, as well as the total time taken for the entire processing. This information can be useful for troubleshooting performance issues - for example, finding which step is the least computationally efficient, or finding why the video frame rate dropped from 30 to 15 after adding some image/video processing steps. The tool is available from the video source's context menu in a running sandbox.
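The kind of measurement this dialog reports can be sketched as per-step timing around each function in the chain, averaged over a number of frames. The helper and step functions below are illustrative assumptions, not the application's internals.

```python
import time

def time_graph(frame, steps, runs=100):
    # Run the whole graph `runs` times, accumulating per-step wall time
    totals = {step.__name__: 0.0 for step in steps}
    for _ in range(runs):
        f = frame
        for step in steps:
            start = time.perf_counter()
            f = step(f)
            totals[step.__name__] += time.perf_counter() - start
    # Average time per frame for each step, in milliseconds
    averages = {name: total / runs * 1000 for name, total in totals.items()}
    return averages, sum(averages.values())

def blur(frame):
    # Hypothetical step: average each pixel with its neighbour
    return [(a + b) // 2 for a, b in zip(frame, frame[1:] + frame[:1])]

def invert(frame):
    # Hypothetical step: invert every pixel value
    return [255 - p for p in frame]

averages, total_ms = time_graph(list(range(256)), [blur, invert], runs=50)
print(averages, total_ms)
```

A step whose average dominates the total is the natural first candidate when the frame rate drops after adding processing steps.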

Note: as of version 1.2.4 of the Computer Vision Sandbox, the Video Processing Information dialog allows editing the properties of image processing filter plug-ins at runtime. Once a property is given a new value, its effect can be seen in the running sandbox. Once all changes are done, they can be persisted in the sandbox's configuration, so the next time it runs, the plug-ins will have their updated properties.


Next tutorial: Video writing - from single file to video archive