«Cameras» Section#
The «Cameras» section is designed to display all cameras, camera statuses, camera previews, and the configuration of video stream parameters for each camera. The camera source can be an RTSP stream, an ANPR camera, or a video file.
CARS_Analytics UI supports working with multiple video stream sources simultaneously.
Only the Administrator can add, edit, and delete cameras.
The general view of the «Cameras» section is shown in Figure 72. The column descriptions of the camera table are provided in Table 42.
Table 42. Camera Table Column Descriptions
| Column name | Description |
|---|---|
| Preview | Camera preview image |
| Name | Camera name |
| Type | Video source type |
| Status | Camera status |
| Last Updated | Date and time of the last camera settings update |
| Control Settings | Camera control buttons |
This section contains cameras registered in CARS_Analytics.
The color indicator in the «Status» column shows the camera status. Possible camera statuses are presented in Table 43.
Table 43. Camera Statuses
| Status | Description |
|---|---|
| ![]() | The camera is working correctly |
| ![]() | No connection with the camera (not responding to CARS_Analytics UI requests) |
| ![]() | The camera has been disabled by the administrator, or the camera setup process was not completed |
| ![]() | The camera is in the process of connecting |
All interactions with the cameras are done using the buttons located in the camera row (Table 44).
Table 44. Camera Control Buttons
| Button | Description |
|---|---|
| ![]() | Restart the camera. Reconnect to the camera using the given parameters |
| ![]() | Edit the camera |
| ![]() | View the camera's location on the map |
| ![]() | View the video stream |
Adding a Camera#
To add a camera, click the «Add» button in the «Cameras» section to open the «Create Camera» page.
The general view of the «Create Camera» page is shown in Figure 73.
The page consists of three main blocks:
| Block | Description |
|---|---|
| Connection | Parameters for connecting the source |
| Settings | Camera settings and detection and recognition zone configurations |
| Geoposition | Settings for connecting the camera's geolocation |
Camera Connection#
The camera connection parameters depend on the selected source type.
An example of the camera connection settings in the «Connection» block for the RTSP stream or Video file source type is shown in Figure 74. The description of each field is presented in Table 45.
Each field in the «Connection» block form is mandatory.
Table 45. Description of fields in the camera addition form for RTSP stream or Video file source type
| Name | Description | Values |
|---|---|---|
| Camera Name | Display name in the camera list | Can consist of 1–255 characters, including letters, digits, and symbols. |
| Source type | RTSP stream or Video file. It is recommended to use RTSP stream in production environments; video files can be used for debugging or testing purposes | - RTSP stream<br>- Video file |
| CARS_Stream Server Address | Field for selecting the IP address of the server with the CARS_Stream subsystem installed. If the source is in the same network as the CARS_Stream server, it will appear in the dropdown list | Dropdown list with IP addresses of servers with CARS_Stream subsystem |
| CARS_Stream Server Address (manual input) | Field for entering the IP address of the CARS_Stream subsystem server for connecting to the server in an external network | IP address of CARS_Stream |
| Source Location | RTSP stream address or location of the video file. The path to the video file can be specified relative to the location of CARS_Stream on the server | RTSP: rtsp://ip-address/…<br>Video file: /var/lib/luna/cars/video.mp4 |
| Data Transmission Protocol | Protocol for video stream transmission | - TCP<br>- UDP |
| Camera Operation Switch | Switch for the camera operation | On/Off |
After entering the connection parameters, you need to configure the camera.
An example of the camera connection settings in the «Connection» block for the ANPR camera source type is shown in Figure 75. The description of each field is presented in Table 46.
Each field of the «Connection» block form is required.
Table 46. Description of fields in the camera addition form for ANPR camera source type
| Name | Description | Values |
|---|---|---|
| Name | Display name in the camera list | Can consist of 1–255 characters, including letters, digits, and symbols. |
| Source type | ANPR camera | ANPR camera |
| Username | Username for the ANPR camera | Can consist of 1–255 characters, including letters, digits, and symbols. |
| Password | Password for the ANPR camera | Can consist of 1–255 characters, including letters, digits, and symbols. |
| ANPR Stream Server Address | Field for selecting the IP address of the ANPR Stream server. If the source is in the same network as the ANPR Stream server, it will appear in the dropdown list | Dropdown list with IP addresses of servers with ANPR Stream subsystem |
| ANPR Stream Server Address (manual input) | Field for entering the IP address of the ANPR Stream subsystem server for connecting to the server in an external network | IP address of ANPR Stream |
| Source Location | Field for entering the IP address of the ANPR camera | http://ip-address/… |
| Camera Operation Switch | Switch for the camera operation | On/Off |
After entering the connection parameters, you need to configure the camera.
Camera Settings#
The camera settings block consists of two main subsections: recognition zones management and parameters.
Recognition Zones Management#
This subsection is used for managing detection and recognition zones (Figure 76), as well as movement directions (Figure 77). The description of the settings is provided in Table 47.
Table 47. Description of camera settings
| Name | Description | Values |
|---|---|---|
| Rotation Angle | The rotation angle of the image from the source (step 90°). Used if the incoming video stream is rotated, for example, if the camera is mounted «upside down» | 0, 90, 180, 270 |
| Automatic Restart | Automatic attempt to reconnect to the camera when the connection is lost. The number of attempts and the time interval are set during setup | On/Off |
| Preview | The preview (camera image) is required to apply detection and recognition zones. Without the preview, CARS_Analytics will not function correctly.<br>To add a preview from a file:<br>1. Take a snapshot of the video stream in its original resolution<br>2. Click the «Upload preview from file» button<br>3. Select an image in the file explorer<br>To add a preview from CARS_Stream, click the corresponding button. After clicking, a «Get Preview» task will be created. Once the task is completed, the preview will be automatically added to the settings | Image in the format specified in Table 6 |
| Edit Detection Zone | The detection zone defines the area of interest processed by CARS_Stream for object detection and tracking | Detection zone applied on the camera preview |
| Add Recognition Zone | The recognition zone defines the area for recognizing vehicle and license plate attributes within the detection zone | Recognition zone applied on the camera preview |
| Set Movement Directions | Allows adding a direction of vehicle movement to match the actual trajectory. When you click «Set Movement Direction», a modal window opens: a vector (arrow from the center of the frame) is drawn, a name is assigned, the course angle and the maximum deviation threshold are set. The movement is considered correct if the minimum angle difference between the vehicle's movement vector and the specified course angle does not exceed the maximum threshold | Name — can consist of 1–255 characters, including letters, digits, and symbols;<br>Course angle relative to the center of the frame — 0…359°;<br>Maximum threshold — 0…90° |
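The direction-matching rule above can be sketched in Python. This is an illustrative sketch under the assumption that the check is a minimal angular difference against the configured course angle; the function and parameter names are not part of LUNA CARS.

```python
def angle_diff(a: float, b: float) -> float:
    """Minimal absolute difference between two course angles, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def direction_matches(vehicle_course: float, zone_course: float,
                      max_threshold: float) -> bool:
    """True when the vehicle's movement vector matches the configured direction."""
    return angle_diff(vehicle_course, zone_course) <= max_threshold

# A vehicle heading 350° against a course angle of 10° deviates by only 20°,
# so with a 30° maximum threshold the movement is considered correct:
print(direction_matches(350, 10, 30))   # True
print(direction_matches(90, 270, 90))   # False: the minimal difference is 180°
```

Note that the difference wraps around 360°, so headings of 350° and 10° are only 20° apart.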
Camera Parameters#
This subsection is designed to manage the camera settings and parameters for object detection and recognition.
Changing the camera settings may lead to errors in the operation of the LUNA CARS System. These parameters should only be adjusted by experienced users or under the guidance of VisionLabs engineers.
The camera parameters interface in the «Settings» block for the RTSP stream or Video file source types is shown in Figure 78. The description of the settings is provided in Table 48.
Table 48. Description of camera parameters for RTSP stream or Video file source
| Name | Description | Possible Values |
|---|---|---|
| General Settings | Section where general camera settings are defined | — |
| Save LP Detections | Enables saving of LP detection data: coordinates of detected LP plates in the movement history. Data is available when exporting a JSON file generated by CARS_API | On/Off |
| Stream preview mode | Select the live stream transmission mode | - Off — preview mode is off<br>- Simple — preview mode displaying only the BBoxes of detected objects<br>- Interactive — preview mode displaying the BBoxes of detected objects and recognized attributes |
| Track history size | Parameter limiting the length of stored track history: only the N most recent detections of each vehicle are saved | 1…N. The upper limit may be restricted by system settings |
| Shooting type | Select the type of camera shot | - Close View camera — close-up<br>- City surveillance camera — city surveillance<br>- Bird View camera — high-altitude view |
| Track detections groups | Section for creating one or more tracking rule sets. Pressing «+» adds a set of fields. Repeatedly pressing «+» creates another set. Unnecessary sets can be deleted using the «trash» icon | — |
| Track area | Detection area from which vehicle intersections are checked | Created zones for the selected camera |
| Intersection policy | Dropdown list determining which area to use for calculating the vehicle intersection percentage: zone (intersection area / zone area), vehicle detection (intersection area / detection area), or the smaller area (intersection area / min(zone area, detection area)) | - Using region area<br>- Using detection area<br>- Using smaller area |
| The functionality of this zone | Dropdown list defining the detection accounting policy for the selected area | - Allow detections — include all objects in the area<br>- Block detections — ignore all objects in the area<br>- Block only the creation of new tracks — include already tracked objects but do not start new tracking inside the area |
| Intersection | An optional parameter that defines the minimum relative intersection area of the vehicle detection with the «Track area», above which an event/incident is recorded. The higher the threshold, the more accurate the vehicle detection, but this may increase the number of misses. If values less than 0 or greater than 1 are entered, an error will appear. Additionally, if values with more than three decimal places are entered, a warning will appear indicating the two nearest valid values | 0.01…1.00 |
| Classifier Parameters | Section for selecting vehicle and LP attributes for recognition | — |
| Vehicle Classifiers | Dropdown list of available vehicle classifiers for recognizing vehicle attributes. The system will only recognize the selected attributes | - Brand and model<br>- Type<br>- Emergency Service Type<br>- Color<br>- Descriptor<br>- Public Transport Type<br>- Special Transport Type<br>- Number of Axles<br>- Vehicle Orientation<br>- Vehicle Positioning |
| LP Classifiers | Dropdown list of available LP classifiers for recognizing LP attributes. The system will only recognize the selected attributes | LP symbols and country |
| Save Full Frames | Controls saving of full frames | On/Off |
| Save All Best shots | By default, only one best shot, the start and end frames of the track, are saved. This flag activates saving of all best shots detected by the system | On/Off |
| Detector Size | The size of the image in pixels on the largest side, where object detection occurs. Increasing the size allows for better detection of distant objects (or small LP plates on large vehicles), but increases processing time | 320…1280 |
| Detector threshold | The detection threshold for detecting objects. If the object crosses the detection area below the threshold, no detection is registered | 0…1 |
| Redetector Threshold | The threshold for the redetector to detect objects. If the object crosses the detection area below the threshold, no detection is registered | 0…1 |
| FGS is used | Enable FGS technology | On/Off |
| Finder of lost detection | Choose an algorithm for finding lost detections after redetection | - Tracker — the algorithm searches for lost detections by predicting the object's trajectory<br>- Detector — the algorithm searches for lost detections by rerunning the detection algorithm on the frame |
| Max number of skipped redetects | Maximum number of frames on which the lost detection search can be skipped using the tracker | 0…1000 |
| Max number of skipped redetects by FGS | Maximum number of frames on which the lost detection search can be skipped using FGS if no moving regions are found | 0…1000 |
| Intersection threshold with FGS regions | The threshold value for detection crossing with FGS motion regions. If it exceeds the threshold, the redetector cannot be skipped | 0…1 |
| Model Frequency | FGS model update frequency in frames | 0…1000 |
| Model scale | The number by which the frame will be enlarged for building the FGS model | 0…1 |
| FGS open kernel size | Size of noise areas (in pixels) that must be removed when obtaining areas of the frame where motion occurred | 1…N |
| FGS close kernel size | Size of assumed static background areas (in pixels) that must be removed from foreground objects when obtaining regions with detected motion | 1…N |
| FGS minimum detection | The minimum size of the foreground area returned for the FGS model | 1…N |
| Attributes correction parameters | Section where filtering parameters for vehicle attributes are set to exclude conflicting values, e.g. when both emergency service type and public transport type are recognized for the same vehicle. Thresholds for attribute recognition accuracy are set for these attributes. If the recognition accuracy is below the specified threshold, the attribute is ignored, and the accuracy is set to 0 | On/Off |
| Brand/Model threshold | Recognition threshold for «Vehicle Brand» and «Model» attributes. Recognitions below the specified accuracy are not considered in the recognition results | 0…1 |
| Vehicle type threshold | Recognition threshold for «Vehicle Type» attribute. Recognitions below the specified accuracy are not considered in the recognition results | 0…1 |
| Emergency Vehicle type threshold | Recognition threshold for «Emergency Service» attribute. Recognitions below the specified accuracy are not considered in the recognition results | 0…1 |
| Special Transport type threshold | Recognition threshold for «Special Transport» attribute. Recognitions below the specified accuracy are not considered in the recognition results | 0…1 |
| Public Transport type Threshold | Recognition threshold for «Public Transport» attribute. Recognitions below the specified accuracy are not considered in the recognition results | 0…1 |
| LP Intersection Threshold | The threshold value for the LP recognition zone crossing | 0…1 |
| AGS Settings for LP | Enable AGS technology. Allows evaluating the quality of LP crop | On/Off |
| Upper threshold | The AGS license plate threshold value, above which the image is considered good | 0…1 |
| Lower threshold | The AGS license plate threshold value, below which the image is considered bad | 0…1 |
| Get information about a vehicle trailer from the stream | Section enabling the retrieval and processing of trailer information from CarStream | On/Off |
| The threshold for the intersection of a car and a trailer | The threshold value for the overlap between vehicle and trailer detections. When the threshold is exceeded, tracks are considered conflicting, and one of the tracks is terminated | 0…1 |
| The threshold for detecting the type of «Trailer» | Recognition threshold for «Trailer» attribute. Recognitions below the specified accuracy are not considered in the recognition results | 0…1 |
| Settings for Smoke/Fire detector | Section enabling the smoke/fire detector and related parameters | On/Off |
| Min score threshold | Recognition threshold for «Smoke/Fire» detector. Detections with accuracy below the threshold will not be recorded as events | 0…1 |
| Number of blocks in height | Number of vertical divisions in the frame for the detector to work. It is recommended to choose so that the block size in height is at least 150–200 px at the current resolution | 1…N |
| Number of blocks in width | Number of horizontal divisions in the frame for the detector to work. It is recommended to choose so that the block size in width is at least 150–200 px at the current resolution | 1…N |
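The three intersection policies from Table 48 can be expressed as a small Python sketch. This is illustrative only; the function and policy names are assumptions, not part of the CARS_Stream API.

```python
def intersection_ratio(inter_area: float, zone_area: float,
                       det_area: float, policy: str) -> float:
    """Relative intersection of a vehicle detection with the track area."""
    if policy == "region":        # Using region area
        base = zone_area
    elif policy == "detection":   # Using detection area
        base = det_area
    elif policy == "smaller":     # Using smaller area
        base = min(zone_area, det_area)
    else:
        raise ValueError(f"unknown policy: {policy}")
    return inter_area / base

# A 4000 px² detection overlapping a 10000 px² track area by 3000 px²:
for p in ("region", "detection", "smaller"):
    print(p, intersection_ratio(3000, 10000, 4000, p))
```

The resulting ratio is what the «Intersection» threshold (0.01…1.00) is compared against: in this example the «smaller area» policy yields 0.75, so a threshold of 0.7 would register the event while 0.8 would not.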
The camera parameters interface in the «Settings» block for the ANPR camera source type is shown in Figure 79. The settings are described in Table 49.
Table 49. Description of Camera Settings for the ANPR Camera Source Type
| Name | Description | Possible values |
|---|---|---|
| Classifiers parameters | Section for selecting vehicle and LP attributes for recognition | - |
| Vehicle classifiers | Dropdown list of available vehicle classifiers for recognizing vehicle attributes. The system will recognize only the selected attributes; excluded attributes will not be recognized. Changing the attribute list affects system performance: the fewer attributes selected, the better the performance | - Brands and models<br>- Vehicle type<br>- Emergency service type<br>- Color<br>- Descriptor<br>- Public transport type<br>- Special transport type<br>- Number of Axles<br>- Vehicle orientation<br>- Vehicle positioning |
| LP classifiers | Dropdown list of available LP classifiers for recognizing LP attributes. The system will recognize only the selected attributes. Excluded attributes will not be recognized. Changing the attribute list affects system performance: the fewer attributes selected, the better the performance | LP symbols and country |
| Save full frames | Controls saving of full frames | On/Off |
| Save all best shots | By default, only one best shot and the first and last frames of the track are saved. This flag activates saving of all best shots received by the system | On/Off |
| Attributes correction parameters | Section for setting filtering parameters for vehicle attributes to exclude conflicting values, e.g., when both emergency service type and public transport type are recognized for the same vehicle. Thresholds for recognition accuracy are set for these attributes. If the recognition accuracy is below the specified threshold, the attribute is ignored, and the accuracy is set to 0. For more information, see Attribute correction parameters in the document «CARS_API. Administrator's Guide» | On/Off |
| Model/Brand threshold | Recognition threshold for «Vehicle Brand» and «Model» attributes. Recognitions below the specified accuracy are not considered in the recognition results | 0.0000…1.0000 |
| Vehicle type threshold | The threshold value of recognition accuracy for the attribute «Vehicle type». Recognitions with an accuracy below the specified one are not taken into account when displaying recognition results | 0.0000…1.0000 |
| Emergency services type Threshold | The threshold value of recognition accuracy for the «Emergency Services» attribute. Recognitions with an accuracy below the specified one are not taken into account when displaying recognition results | 0.0000…1.0000 |
| Special transport type threshold | Recognition threshold for «Special Transport» attribute. Recognitions below the specified accuracy are not considered in the recognition results | 0.0000…1.0000 |
| Public transport type threshold | Recognition threshold for «Public Transport» attribute. Recognitions below the specified accuracy are not considered in the recognition results | 0.0000…1.0000 |
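The attribute correction logic described above (recognitions below the configured threshold are ignored and their accuracy is set to 0) can be illustrated with a minimal sketch; the function name is hypothetical.

```python
def apply_threshold(score: float, threshold: float) -> float:
    """Zero out recognition scores below the configured correction threshold."""
    return score if score >= threshold else 0.0

# With a vehicle-type threshold of 0.5:
print(apply_threshold(0.42, 0.5))  # 0.0 — ignored in the recognition results
print(apply_threshold(0.90, 0.5))  # 0.9 — kept
```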
The camera parameters interface in the «Settings» block for the RTSP buffer source type is shown in Figure 80. The settings are described in Table 50.
Table 50. Description of Camera Settings for the RTSP Buffer source type
| Name | Description | Possible Values |
|---|---|---|
| Stream buffer parameters | Allows configuring how frames are sampled from the input buffer to the output stream: sampling interval (delay in ms) and buffer size (number of stored frames) | - |
| Frame acquire delay in milliseconds | The interval between frame sampling from the RTSP buffer into the output stream. Affects the effective frame rate and delay | 1…100000 |
| Stream buffer size | The number of frames stored in the RTSP buffer for subsequent sampling | 1…1000 |
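Assuming the output stream takes one frame from the buffer per delay interval, the effective output frame rate can be estimated as follows. This is an illustration of the relationship, not a formula from CARS_Stream.

```python
def effective_fps(frame_acquire_delay_ms: int) -> float:
    """Approximate output frame rate for a given frame acquire delay."""
    return 1000.0 / frame_acquire_delay_ms

print(effective_fps(40))    # 25.0 — a 40 ms delay yields about 25 frames/s
print(effective_fps(1000))  # 1.0 — one frame per second
```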
Classifiers#
A classifier is an object of the CARS_API subsystem designed to recognize specific attributes of a vehicle or license plate. Recognition results are displayed in the event or incident card.
The table below maps the classifiers used in camera settings in the CARS_Analytics interface to the corresponding classifiers of the LUNA CARS_API subsystem used to determine these attributes.
Table 51. Classifier correspondence table
| Classifier in camera settings | Classifier in LUNA CARS_API |
|---|---|
| Vehicle type | vehicle_type |
| Brands and models | car_brand_model_v2 |
| Vehicle color | detailed_vehicle_color |
| Vehicle emergency type | detailed_vehicle_emergency_v2 |
| Public transport type | public_transport_type |
| Special transport type | special_transport_type |
| Vehicle wheel axles count | vehicle_axles |
| Vehicle descriptor | vehicle_descriptor_v2 |
| Vehicle orientation | vehicle_orientation_v1 |
| Vehicle position | vehicle_position_v1 |
| License plate symbols and country | grz_all_countries |
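When working with CARS_API responses programmatically, the mapping from Table 51 can be kept as a simple dictionary. This is a convenience sketch, not part of the product.

```python
# UI classifier name -> LUNA CARS_API classifier identifier (Table 51)
CLASSIFIER_MAP = {
    "Vehicle type": "vehicle_type",
    "Brands and models": "car_brand_model_v2",
    "Vehicle color": "detailed_vehicle_color",
    "Vehicle emergency type": "detailed_vehicle_emergency_v2",
    "Public transport type": "public_transport_type",
    "Special transport type": "special_transport_type",
    "Vehicle wheel axles count": "vehicle_axles",
    "Vehicle descriptor": "vehicle_descriptor_v2",
    "Vehicle orientation": "vehicle_orientation_v1",
    "Vehicle position": "vehicle_position_v1",
    "License plate symbols and country": "grz_all_countries",
}

print(CLASSIFIER_MAP["Brands and models"])  # car_brand_model_v2
```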
Camera geolocation#
This section contains fields for entering camera location information (Figure 81).
Filling in these fields allows the camera's precise location to be displayed on the map when you click the corresponding icon in the camera list. By default, each camera is assigned default coordinates.
Detection Zone#
The detection zone defines the area of interest processed by CARS_Stream for object detection and tracking purposes. It can be configured as a rectangular area within the frame.
To configure and edit the detection zone, click the «Edit detection zone» button.
CARS_Stream processes only the area within the selected rectangle.
Correctly defining the detection zone significantly improves the performance of LUNA CARS. The detection zone can be used to exclude parts of the frame where object detection and attribute identification (e.g., vehicle and license plate attributes) is unnecessary.
An example of adding a detection zone is shown in Figure 82.
The detection zone is displayed on the preview as a yellow rectangular area (only the selected area will be processed by CARS_Stream).
Editing the Detection Zone#
To edit the detection zone, click on «Edit detection zone».
To change the shape of the detection zone, drag one of the control points located around the perimeter.
To move the detection zone, click on the yellow area and drag it within the preview.
To reset the detection zone, click on the «Full Frame» button.
Below the preview are the geometric parameters of the detection zone:
- Frame size – the size of the full preview frame;
- Detection zone coordinates – the «X» and «Y» coordinates, with the starting point at the top-left corner of the preview;
- Detection zone width and height.
After completing the detection zone settings, click the «Save» button in the lower-left corner of the window. To cancel adding the zone, press [Esc] on your keyboard or click anywhere outside the form.
If you need to start adding the zone again, click the «Reset» button.
Recognition Zone#
The recognition zone defines the area for event recognition within the detection zone. It is a polygon that can be adjusted freely on the camera preview. CARS_Analytics UI supports the creation of an unlimited number of recognition zones.
The interaction scenarios between objects and the recognition zone are determined by the handler (for more information, see section Handlers).
Object detection and tracking are performed throughout the detection zone, but the best frame is selected only for objects that intersect the recognition zone above the defined threshold. Using the recognition zone allows limiting the area for selecting the best frame without losing track information outside the recognition zone.
The principle of interaction between the object’s BBox and the recognition zone is illustrated in Figure 83.
Si is the portion of the recognition zone area (shown in red) covered by the object’s BBox on the i-th frame. The value of Si is compared with the threshold, and the scenario is triggered when Si exceeds it.
When building zones, it is important to consider the direction of the object’s movement around the zone, as the area of intersection between the BBox and recognition zone will differ on different frames and for different movement trajectories.
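The coverage check can be sketched for the simplified case of a rectangular zone (the actual recognition zone is an arbitrary polygon, and these function names are illustrative, not CARS_Stream API):

```python
def rect_intersection_area(a, b):
    """Intersection area of two axis-aligned rectangles (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def zone_coverage(zone, bbox):
    """S_i: share of the zone's area covered by the object's BBox on frame i."""
    zone_area = (zone[2] - zone[0]) * (zone[3] - zone[1])
    return rect_intersection_area(zone, bbox) / zone_area

zone = (100, 100, 300, 200)   # a 200x100 px recognition zone
bbox = (200, 150, 400, 300)   # a vehicle BBox overlapping the zone's corner
print(zone_coverage(zone, bbox))  # 0.25 — compared against the threshold
```

Here the BBox covers a quarter of the zone, so the scenario triggers only if the configured threshold is below 0.25.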
An example of vehicle trajectory «trails» is shown in Figure 84.
The final area of intersection between BBox 1 and the recognition zone is marked in red, and that of BBox 2 in yellow. The intersection area for BBox 1 is larger than for BBox 2, so the best frame will be selected only for BBox 1. If no threshold for intersection and movement direction is set, both BBoxes will be processed by the system.
The list of created recognition zones can be found in the camera settings (Figure 85). Each zone has a color corresponding to the zone's name color.
Creating and Editing a Recognition Zone#
To create a recognition zone, click the «Add Recognition Zone» button.
To edit a recognition zone, click the edit button in the row of the required zone. The recognition zone management window will open (Figure 86). The algorithm for creating a recognition zone is presented in Table 52.
It is recommended to build recognition zones in the camera's plane without referring to the surrounding environment, as the check for the object’s inclusion in the zone happens with rectangular BBoxes, not arbitrary object shapes. Recommendations for creating zones for industrial use with handler examples are provided in Handlers section.
Table 52. Recognition Zone Creation Algorithm
| № | Description |
|---|---|
| 1 | Create a recognition zone. To create a recognition zone, define the vertices of the polygon, which will form the recognition zone. To create a vertex, click the left mouse button on the camera preview. It is not recommended to create more than 10 vertices. The edges of the polygon should not overlap. The recognition zone should be larger than the BBox of possible detectable objects. |
| 2 | Confirm the creation of the zone – click on the initial vertex of the polygon. |
| 3 | Save the recognition zone – click the «Save» button located under the preview. To reset the created polygon, click the «Start over» button. |
| 4 | Enter the recognition zone's name in the «Recognition zone name» field. It can be 1–255 characters long and contain letters, numbers, and symbols. |
| 5 | Select the color for the zone or leave the default color. The color can be selected from the list or by entering the HEX color code. |
| 6 | Finish creating the recognition zone – click the «Save» button at the bottom of the form. To cancel changes, press [Esc] on your keyboard or click anywhere outside the window. |
Deleting Recognition Zone#
To delete a recognition zone, click the delete button in the zone's row. A warning will appear (Figure 87), where you must confirm the action by clicking the «Delete» button or cancel it by clicking the «Cancel» button.
Deletion is not possible if the zone is used in a handler.
Restarting the Camera#
In cases where the camera has been disconnected due to an error (the status indicator is red), the administrator can manually restart it. The restart button is located in the camera's row.
When restarting the camera, the connection to the source is re-established using the parameters specified in the camera settings.
Editing the Camera#
To edit the camera, click the edit button in the camera's row.
A camera settings form will open, where any parameters can be changed.
Deleting the Camera#
A camera can only be deleted while editing it. The delete button is located at the top of the camera page.
Click the «Delete» button, and a warning will appear (Figure 88), where you must confirm the action by clicking the «Delete» button or cancel the action by clicking the «Cancel» button.
Viewing the Camera Location#
To view the camera's location on the map, click the map button in the camera's row.
A Yandex.Maps window will open in the web browser (Figure 89). The window displays the camera’s location marker on the map, along with the coordinates and address corresponding to the coordinates.
The camera's geolocation is tied through the administrator interface, as described in the «CARS_Analytics. Administrator’s Guide».
Viewing the Video Stream#
To view the processed video stream, click the view button in the camera's row.
A window will open in the web browser (Figure 90), where CARS_Stream performs object detection and tracking in real-time within the detection zone in the video stream. The frame boundaries are defined by the detection zone settings.
The video stream viewing mode can be changed or disabled in the camera's parameter settings.
Viewing the processed video stream is used for camera debugging.




