
CARS Stream Configuration#

This section describes how to configure CARS Stream.

The configuration process must be performed under the superuser account (with root privileges).

Configuration files#

The list of configuration files is presented in Table 4.

Table 4. Configuration files

| Name | Path | Description | Comment |
| --- | --- | --- | --- |
| csConfig.conf | /bin/data/ | Configuration file with general parameters of CARS Stream | Appendix 2 |
| TrackEngine.conf | /bin/data/ | TrackEngine library settings | Appendix 3 |
| vehicleEngine.conf | /bin/data/ | Parameters of object detectors | Appendix 4 |
| runtime.conf | /bin/data/ | Launch options for CARS Stream | Appendix 5 |
| Input.json | /bin/data/ | Source options | See 4.4 |
| faceengine.conf | /bin/data/ | Parameters of the human detector | Appendix 6 |

Each parameter has its own editing recommendation (Appendix 1).

Detectors#

This section provides information about vehicle, LP, smoke, fire, pedestrian and animal detectors.

Information about detector performance testing is provided in Section 7.

Vehicle detector#

Vehicle detectors are designed for the detection, redetection, and tracking of vehicles in video streams and video files.

The list of vehicle detectors is given in Table 5.

Table 5. Vehicle detector description

Name Description
VehicleDetectorV4 The latest version of the vehicle detector, which provides information about the position of vehicles in the image. The detector has a number of advantages:
- Improved performance;
- Increased speed and accuracy;
- A vehicle redetection algorithm that improves accuracy;
- Support for birdview images;
- Fixed false positive detections for cameras mounted on buildings.
This version includes additional parameter configurations for the vehicle and LP detectors.

Detection fields received via vehicle detectors are described in Table 6.

Table 6. Vehicle detection fields description

| Field | Type | Description | Possible values |
| --- | --- | --- | --- |
| detections | array | The coordinates and size of each vehicle detection in the image, along with a confidence score for the detection | A list of detected vehicles; each detection includes 5 fields: height, score, width, x, y |
| execution_time | int | Execution time in milliseconds | - |
| height | int | BBox height | 0…1080 |
| score | float | Confidence score of the vehicle detection | 0.0000…1.0000 |
| width | int | BBox width | 0…1920 |
| x | int | Horizontal coordinate of the upper-left corner of the BBox | 0…1920 |
| y | int | Vertical coordinate of the upper-left corner of the BBox | 0…1080 |
| detector | string | Detector type name | car |

Example of a system response for vehicle detection:

{
    "detections": [
        {
            "height": 298,
            "score": 0.9394,
            "width": 514.006,
            "x": 0,
            "y": 0
        }
    ],
    "detector": "car",
    "execution_time": 153
}
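A response in this format is straightforward to consume in client code. The sketch below (plain Python; the function name is illustrative and not part of the CARS Stream API) parses the sample response above and picks the detection with the highest confidence score:

```python
import json

def best_detection(response_json: str) -> dict:
    """Return the detection with the highest confidence score."""
    response = json.loads(response_json)
    return max(response["detections"], key=lambda d: d["score"])

# The sample vehicle-detection response shown above
sample = """
{
    "detections": [
        {"height": 298, "score": 0.9394, "width": 514.006, "x": 0, "y": 0}
    ],
    "detector": "car",
    "execution_time": 153
}
"""

best = best_detection(sample)
print(best["score"])  # → 0.9394
```

The same parsing applies to LP detection responses, since they share the detections/detector/execution_time structure.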

LP detector#

The list of LP detectors is given in Table 7.

Table 7. LP detector description

Name Description
PlateDetectorV5 The latest version of the license plate detector, which provides information about the position of license plates in the image. It has a number of changes relative to previous versions:
- Improved accuracy;
- Multiple license plates can now be detected on the same vehicle;
- Detection of the relevant license plate (the one related to the desired vehicle);
- Reduced speed relative to PlateDetectorV4.

Detection fields received via LP detectors are described in Table 8.

Table 8. LP detection fields description

| Field | Type | Description | Possible values |
| --- | --- | --- | --- |
| detections | array | The coordinates and size of each LP detection in the image, along with a confidence score for the detection | A list of detected LPs; each detection includes 5 fields: height, score, width, x, y |
| execution_time | int | Execution time in milliseconds | - |
| height | int | BBox height | 0…1080 |
| score | float | Confidence score of the LP detection | 0.0000…1.0000 |
| width | int | BBox width | 0…1920 |
| x | int | Horizontal coordinate of the upper-left corner of the BBox | 0…1920 |
| y | int | Vertical coordinate of the upper-left corner of the BBox | 0…1080 |
| detector | string | Detector type name | grz |

Example of a system response for LP detection:

{
    "detections": [
        {
            "height": 40,
            "score": 1,
            "width": 72,
            "x": 413,
            "y": 217
        }
    ],
    "detector": "grz",
    "execution_time": 252
}

Pedestrian detector#

The pedestrian detector is designed for the detection, redetection, and tracking of pedestrians in multimedia files.

A license is required for the detector to work (see «LUNA CARS. Installation Guide»).

The description of the detector is given in Table 9.

Table 9. Pedestrian detector description

| Name | Description |
| --- | --- |
| HumanDetector | Provides information about the location and position of a pedestrian in a sequence of frames |

Animal detector#

The animal detector is designed for the detection, redetection, and tracking of animals in multimedia files.

A license is required for the detector to work (see «LUNA CARS. Installation Guide»).

The description of the detector is given in Table 10.

Table 10. Animal detector description

| Name | Description |
| --- | --- |
| AnimalDetectorV1 | Provides information about the location and position of an animal in a sequence of frames |

Smoke and fire detector#

The smoke and fire detector is designed to detect fires in video streams and video files.

The description of the detector is given in Table 11.

Table 11. Smoke and fire detector description

| Name | Description |
| --- | --- |
| smokeFireDetectorV1 | Provides smoke and/or fire detection information on a video stream or video file |

Frame Processing Strategies#

Frame processing strategies are used to select the best shot of the object.

Each newer strategy speeds up the detection and redetection processes and improves recognition accuracy.

The following strategies are implemented in CARS Stream:

  • Common;
  • Redetect;
  • DROI;
  • DROIFgs;
  • LpDROI;
  • Coroutine;
  • CoroutineV2.

Strategies, best-shot parameters, and track parameters are configured in the TrackEngine.conf configuration file, which also specifies the current strategy used by the latest version of the system. Changing the default strategy is not recommended.

Common#

This strategy is based on detecting objects in the full frame. The strategy runs for each video stream and determines the BBoxes of objects of interest over the entire frame.

The strategy performs full-frame redetection once every several frames; an object track is then compiled from the detected object BBoxes on each frame.

Redetect#

The Redetect strategy is implemented on top of the Common strategy. After objects are detected on the full frame and their BBoxes are determined, the Redetect strategy performs subsequent redetection only within the detected BBoxes to compile a track for each object. This reduces the time and resources spent on frame processing.

DROI#

As part of this strategy, a region for intersection (DROI) and a threshold value for the intersection of an object's BBox with the DROI area are set on the frame. When an object's BBox enters the DROI area (i.e., when the BBox overlaps the DROI area by a value equal to or greater than the threshold), the system tracks the location of this BBox within the DROI area.

The region for intersection (DROI) is the area of interest on the source frame of the video stream or video file, in which the best frame is selected and the object's track is formed. The region for intersection is set in the CARS Analytics interface. Detection and tracking of the object are still performed on the whole original frame.

All geometric parameters are specified in pixels. Geometric parameters include:

  • x – horizontal coordinate of the upper left point of BBox or DROI;
  • y – vertical coordinate of the upper left point of BBox or DROI;
  • width – width of BBox or DROI;
  • height – height of BBox or DROI.
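The BBox-versus-DROI intersection test described above can be sketched in a few lines. The snippet below is illustrative Python, not the actual CARS Stream implementation; in particular, measuring the threshold as the fraction of the BBox area that falls inside the DROI is an assumption based on the description:

```python
def intersection_ratio(bbox, droi):
    """Fraction of the BBox area that lies inside the DROI.

    Both arguments are (x, y, width, height) tuples in pixels,
    with (x, y) the upper-left corner.
    """
    bx, by, bw, bh = bbox
    dx, dy, dw, dh = droi
    # Width and height of the overlap rectangle (0 if the boxes are disjoint)
    ox = max(0, min(bx + bw, dx + dw) - max(bx, dx))
    oy = max(0, min(by + bh, dy + dh) - max(by, dy))
    return (ox * oy) / (bw * bh) if bw and bh else 0.0

def bbox_in_droi(bbox, droi, threshold):
    """True when the BBox crosses the DROI by at least the threshold."""
    return intersection_ratio(bbox, droi) >= threshold

# A BBox whose right half lies inside the DROI, checked against a 0.5 threshold
print(bbox_in_droi((0, 0, 100, 100), (50, 0, 200, 200), 0.5))  # → True
```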

The generation of frame lists within one track is possible only for vehicles whose BBox intersects the DROI. Intersection areas are highlighted in red (Figure 3).

Schematic representation of the object following through the DROI area on a sequence of frames
Figure 3. Schematic representation of the object following through the DROI area on a sequence of frames

Frames with detections received within the same track are sequentially saved to a buffer. When the buffer is full, the algorithm for determining the best shot compares the parameters of each new frame with the stored ones:

  • BBox size in pixels;
  • Lack of vehicle overlaps by other objects.

At the end of the track, the best-shot algorithm selects the shot with the least overlap and the largest BBox size.
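The two criteria above (least overlap first, largest BBox as tie-breaker) amount to a simple ranking over the buffered frames. The sketch below is illustrative Python; the frame representation and the way overlap is scored are assumptions, not the actual CARS Stream implementation:

```python
def best_shot(frames):
    """Pick the frame with the least overlap and, among ties, the largest BBox.

    Each frame is a dict with 'overlap' (fraction of the vehicle covered
    by other objects, 0.0..1.0) and the 'width'/'height' of its BBox in pixels.
    """
    # Sort key: lower overlap wins; negated area makes larger BBoxes win ties.
    return min(frames, key=lambda f: (f["overlap"], -f["width"] * f["height"]))

# A toy track: one occluded frame and two clean ones of different sizes
track = [
    {"overlap": 0.30, "width": 400, "height": 250},
    {"overlap": 0.00, "width": 350, "height": 220},
    {"overlap": 0.00, "width": 500, "height": 300},
]
print(best_shot(track))  # the unoccluded frame with the largest BBox
```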

DROIFgs#

This strategy is based on the DROI strategy and uses the FGS algorithm. If there is no movement in the detection zone, the strategy does not start and does not consume system resources.

LpDROI#

This strategy is based on the DROI strategy to detect and select the best shot only of the LP. The system uses only the BBox of LP that has an intersection with a given detection area for further work.

Coroutine#

The Coroutine strategy combines the algorithms of all previous strategies, while significantly reducing the usage of time and system resources for frame processing due to the simultaneous processing of several video streams.

When several video streams are running simultaneously and frame processing is required for each of them, the Coroutine strategy collects the full frames of each of the streams and runs once for several streams.

The processing of frames and the compiling of object tracks is performed according to the algorithms of the DROIFgs and LpDROI strategies.

CoroutineV2#

An updated version of the Coroutine strategy, which includes several different algorithms for finding the best detections and selecting the best frames. The algorithm may be selected in CARS Analytics UI.

Source Configuration#

Sources are configured using the /data/input.json configuration file.

Video files#

CARS Stream supports working with video files. Video files are added in the «video-sources» block. The description of the parameters is given in Table 12.

Table 12. Description of «video-sources» parameters

| Parameter | Description | Possible values | Default value |
| --- | --- | --- | --- |
| name | Source name. The name of each video file must be unique | Latin characters, digits 0…9, and the symbols ".", "_", "-" | video_id_0 |
| roi | Region for intersection coordinates in pixels. The region for intersection defines the area of interest processed by CARS Stream for object detection and tracking. Given by four coordinates [x, y, w, h]: x and y are the horizontal and vertical coordinates of the upper-left corner of the region; w and h are its width and height | Any value from 0 to the resolution of the specific video file | 0, 0, 0, 0 |
| rotation | The rotation angle of the video file. Used when the incoming video stream is rotated, for example, when the camera is installed upside down | 0, 90, 180, 270 | 0 |
| transport | Video streaming protocol | UDP or TCP | TCP |
| url | The path to the video file. The path can be absolute or relative | Any location accessible from the CARS Stream directory | /media/storage/example_video1.avi |

To connect several video files, add an object with a unique name to the «video-sources» block. The number of connected video files is unlimited.
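As an illustration, a «video-sources» block with two video files might look like the sketch below. This is an assumed fragment built only from the parameters in the table above; the surrounding structure of input.json, the second source's name, and its path are hypothetical:

```json
{
    "video-sources": [
        {
            "name": "video_id_0",
            "roi": [0, 0, 0, 0],
            "rotation": 0,
            "transport": "TCP",
            "url": "/media/storage/example_video1.avi"
        },
        {
            "name": "video_id_1",
            "roi": [100, 50, 1280, 720],
            "rotation": 0,
            "transport": "TCP",
            "url": "/media/storage/example_video2.avi"
        }
    ]
}
```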

Video stream#

Video streams are managed using API requests.

A detailed description of the requests is available at "/docs/stream/rus/rest_api/CarStreamServerApi.html".