«Cameras» Section#
The «Cameras» section is designed to display all cameras, camera statuses, camera previews, and the configuration of video stream parameters for each camera. The camera source can be an RTSP stream, an ANPR camera, or a video file.
CARS_Analytics UI supports working with multiple video stream sources simultaneously.
Only the Administrator can add, edit, and delete cameras.
The general view of the «Cameras» section is shown in Figure 91. The column descriptions of the camera table are provided in Table 52.
Table 52. Camera Table Column Descriptions
| Column name | Description |
|---|---|
| Preview | Camera preview image |
| Name | Camera name |
| Type | Video source type |
| Status | Camera status |
| Last Updated | Date and time of the last camera settings update |
| Control Settings | Camera control buttons |
This section contains cameras registered in CARS_Analytics.
The color indicator in the «Status» column shows the camera status. Possible camera statuses are presented in Table 53.
Table 53. Camera Statuses
| Status | Description |
|---|---|
| ![]() | The camera is working correctly |
| ![]() | No connection with the camera (not responding to CARS_Analytics UI requests) |
| ![]() | The camera has been disabled by the administrator, or the camera setup process was not completed |
| ![]() | The camera is in the process of connecting |
All interactions with the cameras are done using the buttons located in the camera row (Table 54).
Table 54. Camera Control Buttons
| Button | Description |
|---|---|
| ![]() | Restart the camera. Reconnect to the camera using the given parameters |
| ![]() | Edit the camera |
| ![]() | View the camera's location on the map |
| ![]() | View the video stream |
Adding a Camera#
To add a camera, click the «Add» button in the «Cameras» section to open the «Create Camera» page.
The general view of the «Create Camera» page is shown in Figure 92.
The page consists of 3 main blocks:
| Block | Description |
|---|---|
| Connection | Parameters for connecting the source |
| Settings | Camera settings and detection and recognition zone configurations |
| Geoposition | Settings for connecting the camera's geolocation |
Camera Connection#
The camera connection parameters depend on the selected source type.
An example of the camera connection settings in the «Connection» block for the RTSP stream or Video file source type is shown in Figure 93. The description of each field is presented in Table 55.
Each field in the «Connection» block form is mandatory.
Table 55. Description of fields in the camera addition form for RTSP stream or Video file source type
| Name | Description | Values |
|---|---|---|
| Camera operation switch | Switching camera operation | On/off |
| Camera Name | Display name in the camera list | Can consist of 1–255 characters, including letters, digits, and symbols. |
| Source type | RTSP stream or Video file. It is recommended to use RTSP stream for production environments; video files can be used for debugging or testing purposes | - RTSP stream;<br>- Video file |
| CARS_Stream Server Address | Field for selecting the IP address of the server with the CARS_Stream subsystem installed. If the source is in the same network as the CARS_Stream server, it will appear in the dropdown list | Dropdown list with IP addresses of servers with CARS_Stream subsystem |
| CARS_Stream Server Address (manual input) | Field for entering the IP address of the CARS_Stream subsystem server for connecting to the server in an external network | IP address of CARS_Stream |
| Source Location | RTSP stream address or location of the video file. The path to the video file can be specified relative to the location of CARS_Stream on the server | RTSP: rtsp://ip-address/…<br>Video file: /var/lib/luna/cars/video.mp4 |
| Data Transmission Protocol | Protocol for video stream transmission | - TCP;<br>- UDP |
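As an illustration of the «Source Location» formats in Table 55, the sketch below checks a location string against the expected pattern for each source type. The function name and the set of accepted file extensions are assumptions made for this example; they are not part of the CARS_Analytics API.

```python
import re

def validate_source_location(source_type: str, location: str) -> bool:
    """Hypothetical helper: check that a source location string matches
    the format expected for the chosen source type (see Table 55)."""
    if source_type == "RTSP stream":
        # Expected form: rtsp://ip-address/..., e.g. rtsp://192.168.1.10/stream1
        return re.match(r"^rtsp://[\w.\-]+(:\d+)?(/.*)?$", location) is not None
    if source_type == "Video file":
        # A path relative to the CARS_Stream location on the server,
        # e.g. /var/lib/luna/cars/video.mp4 (extension list is illustrative)
        return location.endswith((".mp4", ".avi", ".mkv"))
    return False
```

Such a check could be run before submitting the form to catch an RTSP address pasted into a video-file field, or vice versa.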
After entering the connection parameters, you need to configure the camera.
An example of the camera connection settings in the «Connection» block for the ANPR camera source type is shown in Figure 94. The description of each field is presented in Table 56.
Each field of the «Connection» block form is required.
Table 56. Description of fields in the camera addition form for ANPR camera source type
| Name | Description | Values |
|---|---|---|
| Camera Operation Switch | Switch for the camera operation | On/Off |
| Name | Display name in the camera list | Can consist of 1–255 characters, including letters, digits, and symbols. |
| Source type | ANPR camera | ANPR camera |
| Username | Username for the ANPR camera | Can consist of 1–255 characters, including letters, digits, and symbols. |
| Password | Password for the ANPR camera | Can consist of 1–255 characters, including letters, digits, and symbols. |
| ANPR Stream Server Address | Field for selecting the IP address of the ANPR Stream server. If the source is in the same network as the ANPR Stream server, it will appear in the dropdown list | Dropdown list with IP addresses of servers with ANPR Stream subsystem |
| ANPR Stream Server Address (manual input) | Field for entering the IP address of the ANPR Stream subsystem server for connecting to the server in an external network | IP address of ANPR Stream |
| Source Location | Field for entering the IP address of the ANPR camera | http://ip-address/… |
After entering the connection parameters, you need to configure the camera.
Camera Settings#
The camera settings block consists of two main subsections: recognition zones management and parameters.
Recognition Zones Management#
This subsection is used for managing detection and recognition zones (Figure 95), as well as movement directions (Figure 96). The description of the settings is provided in Table 57.
Table 57. Description of camera settings
| Name | Description | Values |
|---|---|---|
| Automatic Restart | Automatic attempt to reconnect to the camera when the connection is lost. The number of attempts and the time interval are set during setup | On/Off |
| Grid columns | Number of vertical divisions in the frame for the detector to operate | Number of blocks vertically |
| Grid rows | Number of horizontal divisions in the frame for the detector to operate | Number of blocks horizontally |
| Preview | The preview (camera image) is required to apply detection and recognition zones. Without the preview, CARS_Analytics will not function correctly.<br>The process of adding a preview from a file:<br>1. Take a snapshot of the video stream in its original resolution;<br>2. Click the «Upload preview from file» button;<br>3. Select an image in the file explorer.<br>To add a preview from CARS_Stream, click the corresponding button. After clicking, a «Get Preview» task will be created. Once the task is completed, the preview will be automatically added to the settings | Image in the format specified in Table 6 |
| Edit Detection Zone | The detection zone defines the area of interest processed by CARS_Stream for object detection and tracking | Detection zone applied on the camera preview |
| Add Recognition Zone | The recognition zone defines the area for recognizing vehicle and license plate attributes within the detection zone | Recognition zone applied on the camera preview |
| Set Movement Directions | Allows adding a direction of vehicle movement to match the actual trajectory. When you click «Set Movement Direction», a modal window opens: a vector (arrow from the center of the frame) is drawn, a name is assigned, and the course angle and the maximum deviation threshold are set. The movement is considered correct if the minimum angle difference between the vehicle's movement vector and the specified course angle does not exceed the maximum threshold | Name - can consist of 1–255 characters, including letters, digits, and symbols;<br>Course angle relative to the center of the frame - 0…359°;<br>Maximum threshold - 0…90° |
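The movement-direction rule described in Table 57 (the minimum angular difference between the vehicle's movement vector and the configured course angle must not exceed the maximum deviation threshold) can be sketched as follows; the helper name is hypothetical.

```python
def direction_matches(vehicle_angle: float, course_angle: float,
                      max_threshold: float) -> bool:
    """Return True if the vehicle's movement is considered correct:
    the minimal angular difference between the vehicle's movement
    vector and the course angle (0...359 deg) must not exceed the
    maximum deviation threshold (0...90 deg). Illustrative sketch."""
    diff = abs(vehicle_angle - course_angle) % 360
    minimal_diff = min(diff, 360 - diff)  # shortest way around the circle
    return minimal_diff <= max_threshold
```

Note that the comparison wraps around 360°, so a vehicle moving at 350° matches a course angle of 10° with a 30° threshold.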
Camera Parameters#
This subsection is designed to manage the camera settings and parameters for object detection and recognition.
Changing the camera settings may lead to errors in the operation of the LUNA CARS System. These parameters should only be adjusted by experienced users or under the guidance of VisionLabs engineers.
The camera parameters interface in the «Settings» block for the RTSP stream or Video file source types is shown in Figure 97. The description of the settings is provided in Table 58.
Table 58. Description of camera parameters for RTSP stream or Video file source
| Name | Description | Possible Values |
|---|---|---|
| General Settings | Section where general camera settings are defined | — |
| Save LP Detections | Enables saving of LP detection data: coordinates of detected LP plates in the movement history. Data is available when exporting a JSON file generated by CARS_API | On/Off |
| Stream preview mode | Select the live stream transmission mode | - Off - preview mode is off;<br>- Simple - preview mode displaying only the BBox of detected objects;<br>- Interactive - preview mode displaying the BBox of detected objects and recognized attributes |
| Track history size | Parameter limiting the length of stored track history: only the N most recent detections of each vehicle are saved | 1…N. The upper limit may be restricted by system settings |
| Shooting type | Select the type of camera shot | - Close View camera - close-up (by default);<br>- City surveillance camera;<br>- Bird View camera - high-altitude view;<br>- Control point |
| FGS is used | Enable FGS technology | On/Off |
| Track detections groups | Section for creating one or more tracking rule sets. Pressing «+» adds a set of fields. Repeatedly pressing «+» creates another set. Unnecessary sets can be deleted using the «trash» icon | - |
| Track area | Detection area from which vehicle intersections are checked | Created zones for the selected camera |
| Intersection policy | Dropdown list determining which area to use for calculating the vehicle intersection percentage: zone (intersection area / zone area), vehicle detection (intersection area / detection area), or the smaller area (intersection area / min(zone area, detection area)) | - Using region area;<br>- Using detection area;<br>- Using smaller area |
| Zone functionality | Dropdown list defining the detection accounting policy for the selected area | - Allow detections - include all objects in the area;<br>- Block detections - ignore all objects in the area;<br>- Block only the creation of new tracks - include already tracked objects but do not start new tracking inside the area |
| Intersection | An optional parameter that defines the minimum relative intersection area of the vehicle detection with the «Track area», above which an event/incident is recorded. The higher the threshold, the more accurate the vehicle detection, but this may increase the number of misses. If values less than 0 or greater than 1 are entered, an error will appear. Additionally, if values with more than three decimal places are entered, a warning will appear indicating the two nearest valid values | 0.01…1.00 |
| Classifier Parameters | Section for selecting vehicle and LP attributes for recognition | - |
| Vehicle Classifiers | Dropdown list of available vehicle classifiers for recognizing vehicle attributes. The system will only recognize the selected attributes | - Brand and model;<br>- Type;<br>- Emergency Service Type;<br>- Color;<br>- Descriptor;<br>- Public Transport Type;<br>- Special Transport Type;<br>- Number of Axles;<br>- Vehicle Orientation |
| LP Classifiers | Dropdown list of available LP classifiers for recognizing LP attributes. The system will only recognize the selected attributes | LP symbols and country |
| Save Full Frames | Controls saving of full frames | On/Off |
| Save All Best shots | By default, only one best shot, the start and end frames of the track, are saved. This flag activates saving of all best shots detected by the system | On/Off |
| Vehicle detector parameters | Section for configuring the vehicle detector | On/Off |
| Detector Size | The size of the image in pixels on the largest side, where object detection occurs. Increasing the size allows for better detection of distant objects (or small LP plates on large vehicles), but increases processing time | 320…1280 |
| Detector threshold | The detection threshold for detecting objects. If the object crosses the detection area below the threshold, no detection is registered | 0…1 |
| Redetector Threshold | The threshold for the redetector to detect objects. If the object crosses the detection area below the threshold, no detection is registered | 0…1 |
| License plate detector parameters | Section for setting up the LP detector | - |
| Detector threshold | The detection threshold for detecting objects. If the object crosses the detection area below the threshold, no detection is registered | 0…1 |
| Redetector Threshold | The threshold for the redetector to detect objects. If the object crosses the detection area below the threshold, no detection is registered | 0…1 |
| Redetect parameters | Section for configuring the redetector | - |
| Finder of lost detection | Choose an algorithm for finding lost detections after redetection | - Tracker - the algorithm searches for lost detections by predicting the object's trajectory;<br>- Detector - the algorithm searches for lost detections by rerunning the detection algorithm on the frame |
| Max number of skipped redetects | Maximum number of frames on which the lost detection search can be skipped using the tracker | 0…1000 |
| Max number of skipped redetects by FGS | Maximum number of frames on which the lost detection search can be skipped using FGS if no moving regions are found | 0…1000 |
| Intersection threshold with FGS regions | The threshold value for detection crossing with FGS motion regions. If it exceeds the threshold, the redetector cannot be skipped | 0…1 |
| FGS parameters | Section for setting up FGS | - |
| Model Frequency | FGS model update frequency in frames | 0…1000 |
| Model scale | The factor by which the frame is scaled when building the FGS model | 0…1 |
| FGS open kernel size | Size of noise areas (in pixels) that must be removed when obtaining areas of the frame where motion occurred | 1…N |
| FGS close kernel size | Size of assumed static background areas (in pixels) that must be removed from foreground objects when obtaining regions with detected motion | 1…N |
| FGS minimum detection | The minimum size of the foreground area returned for the FGS model | 1…N |
| Attributes correction parameters | Section where filtering parameters for vehicle attributes are set to exclude conflicting values, e.g. when both emergency service type and public transport type are recognized for the same vehicle. Thresholds for attribute recognition accuracy are set for these attributes. If the recognition accuracy is below the specified threshold, the attribute is ignored, and the accuracy is set to 0 | On/Off |
| Brand/Model threshold | Recognition threshold for «Vehicle Brand» and «Model» attributes. Recognitions below the specified accuracy are not considered in the recognition results | 0…1 |
| Vehicle type threshold | Recognition threshold for «Vehicle Type» attribute. Recognitions below the specified accuracy are not considered in the recognition results | 0…1 |
| Emergency Vehicle type threshold | Recognition threshold for «Emergency Service» attribute. Recognitions below the specified accuracy are not considered in the recognition results | 0…1 |
| Special Transport type threshold | Recognition threshold for «Special Transport» attribute. Recognitions below the specified accuracy are not considered in the recognition results | 0…1 |
| Public Transport type Threshold | Recognition threshold for «Public Transport» attribute. Recognitions below the specified accuracy are not considered in the recognition results | 0…1 |
| LP Intersection Threshold | The threshold value for the LP recognition zone crossing | 0…1 |
| AGS Settings for LP | Enable AGS technology. Allows evaluating the quality of LP crop | On/Off |
| Upper threshold | The AGS license plate threshold value, above which the image is considered good | 0…1 |
| Lower threshold | The AGS license plate threshold value, below which the image is considered bad | 0…1 |
| Get information about a vehicle trailer from the stream | Section enabling the retrieval and processing of trailer information from CarStream | On/Off |
| The threshold for the intersection of a car and a trailer | The threshold value for the overlap between vehicle and trailer detections. When the threshold is exceeded, tracks are considered conflicting, and one of the tracks is terminated | 0…1 |
| The threshold for detecting the type of «Trailer» | Recognition threshold for «Trailer» attribute. Recognitions below the specified accuracy are not considered in the recognition results | 0…1 |
| Settings for Smoke/Fire detector | Section enabling the smoke/fire detector and related parameters | On/Off |
| Min score threshold | Recognition threshold for «Smoke/Fire» detector. Detections with accuracy below the threshold will not be recorded as events | 0…1 |
| Settings for Sabotage detector | Section that includes the use of the «Sabotage» detector and related parameters | - |
| Max trigger frequency | This setting defines the frequency of the «Sabotage» handler's activation. The value can be set from 0 to N, where 0 means no limitation, and N is the maximum interval in frames between handler activations. The higher the value, the less frequently the frame change check will occur | 0…N |
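As a sketch of the «Intersection policy» options described in Table 58, the hypothetical helper below computes the vehicle intersection percentage under each of the three policies.

```python
def intersection_ratio(intersection_area: float, zone_area: float,
                       detection_area: float, policy: str) -> float:
    """Compute the vehicle intersection percentage according to the
    selected «Intersection policy» (illustrative sketch of Table 58).
    Areas are in any consistent unit, e.g. pixels squared."""
    if policy == "Using region area":
        denominator = zone_area            # intersection area / zone area
    elif policy == "Using detection area":
        denominator = detection_area       # intersection area / detection area
    elif policy == "Using smaller area":
        denominator = min(zone_area, detection_area)
    else:
        raise ValueError(f"unknown policy: {policy}")
    return intersection_area / denominator
```

The resulting ratio is what the «Intersection» parameter (0.01…1.00) is compared against: with «Using smaller area», a small vehicle fully inside a large zone still yields a ratio of 1.0, which is why this policy is often the most forgiving choice.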
The camera parameter interface in the «Settings» block for the ANPR camera source type is shown in Figure 98. The settings are described in Table 59.
Table 59. Description of Camera Settings for the ANPR Camera Source Type
| Name | Description | Possible values |
|---|---|---|
| Classifiers parameters | Section for selecting vehicle and LP attributes for recognition | - |
| Vehicle classifiers | Dropdown list of available vehicle classifiers for recognizing vehicle attributes. The system will recognize only the selected attributes. Excluded attributes will not be recognized. Changing the attribute list affects system performance: the fewer attributes selected, the better the performance | - Brands and models;<br>- Vehicle type;<br>- Emergency service type;<br>- Color;<br>- Descriptor;<br>- Public transport type;<br>- Special transport type;<br>- Number of Axles;<br>- Vehicle orientation |
| LP classifiers | Dropdown list of available LP classifiers for recognizing LP attributes. The system will recognize only the selected attributes. Excluded attributes will not be recognized. Changing the attribute list affects system performance: the fewer attributes selected, the better the performance | LP symbols and country |
| Save full frames | Controls saving of full frames | On/Off |
| Save all best shots | By default, only one best shot and the start and end frames of the track are saved. This flag activates saving of all best shots received by the system | On/Off |
| Attributes correction parameters | Section for setting filtering parameters for vehicle attributes to exclude conflicting values, e.g., when both emergency service type and public transport type are recognized for the same vehicle. Thresholds for recognition accuracy are set for these attributes. If the recognition accuracy is below the specified threshold, the attribute is ignored, and the accuracy is set to 0. For more information, see Attribute correction parameters in the document «CARS_API. Administrator's Guide» | On/Off |
| Model/Brand threshold | Recognition threshold for «Vehicle Brand» and «Model» attributes. Recognitions below the specified accuracy are not considered in the recognition results | 0.0000…1.0000 |
| Vehicle type threshold | The threshold value of recognition accuracy for the attribute «Vehicle type». Recognitions with an accuracy below the specified one are not taken into account when displaying recognition results | 0.0000…1.0000 |
| Emergency services type Threshold | The threshold value of recognition accuracy for the «Emergency Services» attribute. Recognitions with an accuracy below the specified one are not taken into account when displaying recognition results | 0.0000…1.0000 |
| Special transport type threshold | Recognition threshold for «Special Transport» attribute. Recognitions below the specified accuracy are not considered in the recognition results | 0.0000…1.0000 |
| Public transport type threshold | Recognition threshold for «Public Transport» attribute. Recognitions below the specified accuracy are not considered in the recognition results | 0.0000…1.0000 |
The camera parameter interface in the «Settings» block for the RTSP buffer source type is shown in Figure 99. The settings are described in Table 60.
Table 60. Description of Camera Settings for the RTSP Buffer source type
| Name | Description | Possible Values |
|---|---|---|
| Stream buffer parameters | Allows configuring how frames are sampled from the input buffer to the output stream: sampling interval (delay in ms) and buffer size (number of stored frames) | - |
| Frame acquire delay in milliseconds | The interval between frame sampling from the RTSP buffer into the output stream. Affects the effective frame rate and delay | 1…100000 |
| Stream buffer size | The number of frames stored in the RTSP buffer for subsequent sampling | 1…1000 |
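Assuming one frame is taken from the buffer every «Frame acquire delay» milliseconds, the resulting output frame rate can be estimated as below. This is an illustrative approximation, not a documented formula.

```python
def effective_fps(frame_acquire_delay_ms: int) -> float:
    """Approximate output frame rate implied by the «Frame acquire delay»
    parameter: one frame is sampled from the RTSP buffer every
    frame_acquire_delay_ms milliseconds (illustrative estimate)."""
    assert 1 <= frame_acquire_delay_ms <= 100_000  # range from Table 60
    return 1000.0 / frame_acquire_delay_ms
```

For example, a delay of 40 ms corresponds to roughly 25 frames per second, while 1000 ms yields one frame per second.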
Shooting types#
This parameter modifies the object tracking algorithm. An incorrect shooting type may cause track breaks or duplicate events. The system provides 4 types of shooting. Each shooting type is used depending on the tasks and conditions of video surveillance:
1. Close View camera. Suitable for cameras installed at a height of 2 to 3 meters, at least 1.5 meters away from the lane. It can be used to register passing events, both without stopping and for events where the vehicle stops briefly in front of a barrier (while waiting for the barrier to open). There are no restrictions on the number of vehicles in the tracking zone.
2. City surveillance camera. Suitable for road cameras installed to monitor urban or highway areas, such as city roads, highways, and expressways. Cameras should be installed at a height of more than 4 meters (except for vertical shooting angles). There are no restrictions on the number of vehicles in the tracking zone.
3. Bird View camera. This shooting type involves shooting from a high altitude (including vertical, bird's-eye angles), for example, from drones. There are no restrictions on the number of vehicles in the tracking zone.
4. Control point. Suitable for inspection areas with a barrier or stop line, where vehicles stop for a long time to undergo inspection procedures and registration in external systems involving a person. Cameras should be installed at a height of 2 to 3 meters and can be located close to the lane. The number of vehicles in the tracking zone should not exceed two.
Classifiers#
A classifier is an object of the CARS_API subsystem designed to recognize specific attributes of a vehicle or a license plate. Recognition results are displayed in the event or incident card.
The table below shows the correspondence between classifiers used in camera settings in the CARS_Analytics interface and the corresponding classifiers of the LUNA CARS_API subsystem. It reflects the mapping of attribute data fields and the LUNA CARS_API classifiers used to determine these attributes.
Table 61. Classifier correspondence table
| Classifier in camera settings | Classifier in LUNA CARS_API |
|---|---|
| Vehicle type | vehicle_type |
| Brands and models | car_brand_model |
| Vehicle color | detailed_vehicle_color |
| Vehicle emergency type | detailed_vehicle_emergency |
| Public transport type | public_transport_type |
| Special transport type | special_transport_type |
| Vehicle wheel axles count | vehicle_axles |
| Vehicle descriptor | vehicle_descriptor |
| Vehicle orientation | vehicle_orientation |
| License plate symbols and country | grz_all_countries |
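For reference, the correspondence in Table 61 can be expressed as a simple mapping; the dictionary name is arbitrary.

```python
# Mapping from Table 61: classifier names shown in the camera settings UI
# of CARS_Analytics -> classifier identifiers in the LUNA CARS_API subsystem.
UI_TO_API_CLASSIFIER = {
    "Vehicle type": "vehicle_type",
    "Brands and models": "car_brand_model",
    "Vehicle color": "detailed_vehicle_color",
    "Vehicle emergency type": "detailed_vehicle_emergency",
    "Public transport type": "public_transport_type",
    "Special transport type": "special_transport_type",
    "Vehicle wheel axles count": "vehicle_axles",
    "Vehicle descriptor": "vehicle_descriptor",
    "Vehicle orientation": "vehicle_orientation",
    "License plate symbols and country": "grz_all_countries",
}
```

Such a mapping is useful, for example, when matching attributes selected in the UI against fields of a JSON export generated by CARS_API.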
Camera geolocation#
This section contains fields for entering camera location information (Figure 100).
Filling in these fields allows the camera's precise location to be displayed on the map when you click the corresponding icon in the camera list. If the fields are left empty, the camera is assigned default coordinates.
Detection Zone#
The detection zone defines the area of interest processed by CARS_Stream for object detection and tracking purposes. It can be configured as a rectangular area within the frame.
To configure and edit the detection zone, click on the "Edit detection zone" button.
CARS_Stream processes only the area within the selected rectangle.
Correctly defining the detection zone significantly improves the performance of LUNA CARS. The detection zone can be used to exclude parts of the frame where object detection and attribute identification (e.g., vehicle and license plate attributes) is unnecessary.
An example of adding a detection zone is shown in Figure 101.
The detection zone is displayed on the preview as a yellow rectangular area (only the selected area will be processed by CARS_Stream).
Editing the Detection Zone#
To edit the detection zone, click on «Edit detection zone».
To change the shape of the detection zone, drag one of the control points located around the perimeter.
To move the detection zone, click on the yellow area and drag it within the preview.
To reset the detection zone, click on the «Full Frame» button.
Below the preview are the geometric parameters of the detection zone:
- Frame size – the size of the full preview frame;
- Detection zone coordinates – the «X» and «Y» coordinates, with the starting point at the top-left corner of the preview;
- Detection zone width and height.
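A minimal sketch of how these geometric parameters relate: a detection zone is a rectangle given by its top-left corner, width, and height, and must lie within the frame. The clamping helper below is hypothetical, not part of the product.

```python
def clamp_detection_zone(x: int, y: int, width: int, height: int,
                         frame_w: int, frame_h: int):
    """Clamp a rectangular detection zone to the frame boundaries.
    Coordinates use the top-left corner of the preview as the origin,
    as in the UI. Hypothetical helper for illustration."""
    x = max(0, min(x, frame_w - 1))          # keep origin inside the frame
    y = max(0, min(y, frame_h - 1))
    width = max(1, min(width, frame_w - x))  # zone must fit to the right/below
    height = max(1, min(height, frame_h - y))
    return x, y, width, height
```

Resetting via the «Full Frame» button corresponds to the zone (0, 0, frame width, frame height).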
After completing the detection zone settings, click the «Save» button in the lower-left corner of the window. To cancel adding the zone, press [Esc] on your keyboard or click anywhere outside the form.
If you need to start adding the zone again, click the «Reset» button.
Recognition Zone#
The recognition zone defines the area for event recognition within the detection zone. It is a polygon that can be adjusted freely on the camera preview. CARS_Analytics UI supports the creation of an unlimited number of recognition zones.
The interaction scenarios between objects and the recognition zone are determined by the handler (for more information, see section Handlers).
Object detection and tracking are performed throughout the detection zone, but the best frame is selected only for objects that intersect the recognition zone above the defined threshold. Using the recognition zone allows limiting the area for selecting the best frame without losing track information outside the recognition zone.
The principle of interaction between the object's BBox and the recognition zone is shown in Figure 102.
Si is the area of the recognition zone (red color), which is covered by the object’s BBox for the i-th frame. The value of Si will be compared with the threshold. The scenario is triggered when the value of Si exceeds the threshold.
When building zones, it is important to consider the direction of the object’s movement around the zone, as the area of intersection between the BBox and recognition zone will differ on different frames and for different movement trajectories.
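For a simplified, axis-aligned case, the coverage value Si can be computed as below. Real recognition zones are polygons, so this rectangle-only version is an illustrative sketch rather than the actual algorithm.

```python
def recognition_zone_coverage(bbox, zone):
    """Fraction S_i of the recognition zone covered by the object's BBox
    on frame i. Both arguments are axis-aligned rectangles given as
    (x1, y1, x2, y2); real recognition zones are polygons, so this is
    a simplified sketch."""
    ix1, iy1 = max(bbox[0], zone[0]), max(bbox[1], zone[1])
    ix2, iy2 = min(bbox[2], zone[2]), min(bbox[3], zone[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area (0 if disjoint)
    zone_area = (zone[2] - zone[0]) * (zone[3] - zone[1])
    return inter / zone_area
```

The scenario is triggered on frames where this value exceeds the configured threshold.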
An example of «trails» of vehicle trajectories is shown in Figure 103.
The final area of intersection between BBox 1 and the recognition zone is marked in red, and that of BBox 2 in yellow. The intersection area for BBox 1 is larger than that for BBox 2, so the best frame will be selected only for BBox 1. If no threshold values for intersection and movement direction are set, both BBoxes will be processed by the system.
The list of created recognition zones can be found in the camera settings (Figure 104). Each zone has a color corresponding to the zone's name color.
Creating and Editing a Recognition Zone#
To create a recognition zone, click the «Add Recognition Zone» button.
To edit a recognition zone, click the corresponding button in the row of the required zone. The recognition zone management window will open (Figure 105).
It is recommended to build recognition zones in the camera's plane without referring to the surrounding environment, as the check for the object’s inclusion in the zone happens with rectangular BBoxes, not arbitrary object shapes. Recommendations for creating zones for industrial use with handler examples are provided in Handlers section.
Algorithm for creating a recognition zone:
1․ Create a recognition zone: To create a recognition zone, set the vertices of the polygon that will represent it. To add a vertex, click the left mouse button on the camera preview. It is recommended not to use more than 10 vertices. Ensure that the polygon's edges do not intersect and that the recognition zone is larger than the BBox of the detectable objects.
You can also create a recognition zone by selecting grid sectors. To do this, first configure the number of columns and rows in the marking grid (Figure 106). Enable the grid using the switch to set the sectors (Figure 107). To select a grid cell, click on it. A repeated click will deselect it. An example of using the marking grid can be found in the Handlers section for the «Smoke-fire tracking» handler.
2․ Confirm the creation of the zone: To finish creating the recognition zone, click on the first vertex of the polygon or select the required grid cells. All neighboring cells must be connected by at least one edge.
3․ Save the recognition zone: Click the «Save» button below the preview. To reset the created polygon or selected grid cells, click «Delete».
4․ Enter the zone name: Enter the name of the recognition zone in the «Name of the recognition zone» field. The name can contain from 1 to 255 characters, including letters, numbers, and symbols.
5․ Select the zone color: Choose a color for the recognition zone or leave the default color. The color can be selected from the proposed list or entered as a HEX code.
6․ Complete the creation of the zone: To complete the operation, click the «Save» button at the bottom of the form. To cancel the changes, press [Esc] on the keyboard or click the left mouse button in the area around the window.
Deleting Recognition Zone#
To delete a recognition zone, click the corresponding button in the zone's row. A warning will appear (Figure 108), where you must confirm the action by clicking the «Delete» button or cancel the action by clicking the «Cancel» button.
Deletion is not possible if the zone is used in a handler.
Batch edit camera settings#
With the «Batch Edit» button, you can edit the settings for multiple cameras that use the same source type (Figure 109).
When you click the button, a window will open where you need to select the cameras:
1. Select the camera type (source type): RTSP stream, ANPR camera, or RTSP buffer. You can select only one source type.
2. In the dropdown menu, select the cameras (at least two) that use the chosen source type.
When you click the «Download preview from Cars Stream» button, a task will be created to load the preview from Cars Stream for all the selected cameras.
After selecting the required cameras, click the «Next» button to continue and configure the settings for the selected cameras.
After clicking «Next», a new page will open with camera setting blocks corresponding to the selected source type. All parameter descriptions are provided in detail above in the Camera Settings section (Figure 110).
You can adjust the parameters in each section, but to apply the changes, you must switch the toggle to the «On» position for the selected section.
Once all parameters are configured, click the «Save» button in the upper-right corner to apply the changes.
Restarting the Camera#
In cases where the camera has been disconnected due to an error (status indicator is red), the administrator can manually restart the camera. The restart button is located in the camera's row.
When restarting the camera, the connection to the source is re-established using the parameters specified in the camera settings.
Editing the Camera#
To edit the camera, click the corresponding button in the camera's row.
A camera settings form will open, where any parameters can be changed.
Deleting the Camera#
A camera can only be deleted while it is being edited. The delete button is located at the top of the camera page.
Click the «Delete» button, and a warning will appear (Figure 111), where you must confirm the action by clicking the «Delete» button or cancel the action by clicking the «Cancel» button.
Viewing the Camera Location#
To view the camera's location on the map, click the corresponding button in the camera row.
A Yandex.Maps window will open in the web browser (Figure 112). The window displays the camera’s location marker on the map, along with the coordinates and address corresponding to the coordinates.
The camera's geolocation is assigned through the administrator interface, as described in the «CARS_Analytics. Administrator's Guide».
Video Stream Viewing#
The user can view the camera video stream in real time. This view is used for monitoring and debugging the camera operation and the video processing pipeline.
The following video preview modes are available:
- Simple — a preview mode that displays only the bounding boxes (BBoxes) of detected objects;
- Interactive — a preview mode that displays the bounding boxes (BBoxes) of detected objects and the recognized attributes.
By default, the video preview is unavailable because in the camera parameters settings, under «General parameters → Stream preview mode», the value «Disabled» is set (Figure 113). To enable the preview, select one of the available modes («Simple» or «Interactive») in this section and save the changes.
In the same section, you can change the preview mode or disable the preview at any time.
To view the processed video stream in the «Cameras» section, click the corresponding button in the row of the required camera (Figure 114). A browser window will open where CARS_Stream performs real-time detection and tracking of objects within the registration zone. The frame boundaries are defined by the detection zone settings.
An example of the simple preview mode is shown in Figure 115. This mode displays the BBoxes of detected objects and renders the created zones.
An example of the interactive preview mode is shown in Figure 116. In this mode, checkboxes can be used to enable or disable the display of zones configured for the camera, as well as to navigate to the event/incident card by clicking on the corresponding element in the preview.




