> 1. Camera parameters. UCMCTrack needs the intrinsic and extrinsic parameters of the camera. Organize them in the format of `tracker/cam_param_files/uavdt/M0101.txt`; one video sequence corresponds to one txt file. If you do not have labelled parameters, you can use the estimation toolbox in the original repo ([https://github.com/corfyi/UCMCTrack](https://github.com/corfyi/UCMCTrack)).
>
> 2. The code does not include camera motion compensation between every two frames; please refer to [https://github.com/corfyi/UCMCTrack/issues/12](https://github.com/corfyi/UCMCTrack/issues/12). From my perspective, since the algorithm assumes 'uniform' motion, updating the compensation between every two frames is not necessary.
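Note 1's parameter files can be read with a small parser. This is a sketch, not the repo's loader: the block names below (`RotationMatrices`, `TranslationVectors`, `IntrinsicMatrix`) are assumptions modelled on UCMCTrack's cam_para files, so adjust them to your actual file layout.

```python
import numpy as np

def load_cam_param(path):
    """Parse a whitespace-separated camera parameter file into named matrices.

    Assumes the file alternates a block name line (e.g. "IntrinsicMatrix")
    with the numeric rows of that matrix -- an assumption based on
    UCMCTrack's cam_para files, not a guaranteed format.
    """
    blocks, name, rows = {}, None, []
    with open(path) as f:
        for line in f:
            tokens = line.split()
            if not tokens:
                continue
            try:
                rows.append([float(t) for t in tokens])
            except ValueError:  # non-numeric line starts a new named block
                if name is not None:
                    blocks[name] = np.array(rows)
                name, rows = tokens[0], []
    if name is not None:
        blocks[name] = np.array(rows)
    return blocks
```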
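The ground-plane tracking that makes per-frame compensation less critical can be sketched as back-projecting a detection's foot point onto the world ground plane. This is a minimal illustration assuming the pinhole model `x_cam = R @ X + t` and a ground plane at z = 0; the function name is illustrative, not from the repo.

```python
import numpy as np

def image_to_ground(u, v, K, R, t):
    """Back-project pixel (u, v) onto the world ground plane z = 0.

    K: 3x3 intrinsic matrix; R, t: world-to-camera rotation and translation.
    Returns the (x, y) ground-plane coordinates of the pixel's ray hit point.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    a = R.T @ ray                                   # ray direction in world frame
    b = R.T @ t
    lam = b[2] / a[2]                               # depth where world z == 0
    return (lam * a - b)[:2]
```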
> **Important Notes on Fast Tracker**
>
> In `fast_tracker.py`, the tracker-related configuration is stored in the global variable `FAST_TRACKER_CONFIG`. It includes thresholds for handling occluded targets (such as velocity damping and bounding-box enlargement) and environment-specific optimizations for road-structure fusion (under the "ROIs" key; see the original paper for the specific values and their meanings).
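The description above suggests a nested dictionary. The following is a purely illustrative sketch of that shape; every key name and value here is invented, not the repo's actual settings.

```python
# Hypothetical sketch of the kind of structure FAST_TRACKER_CONFIG holds.
# All names and numbers are illustrative -- consult fast_tracker.py and the
# original paper for the real keys and values.
FAST_TRACKER_CONFIG = {
    "occlusion": {
        "velocity_damping": 0.9,  # shrink velocity of occluded tracks each frame
        "bbox_enlarge": 1.2,      # enlarge the search box when re-matching
    },
    "ROIs": [
        # road-structure regions as (x1, y1, x2, y2) pixel rectangles
        (0, 400, 1920, 1080),
    ],
}
```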
### ✨ TensorRT Convert and Inference
This code supports **fully automatic** generation and inference of TensorRT engines, **which can be used for both the detection model and the ReID model**. If you have not yet converted a TensorRT engine, just add the `--trt` flag when running, for example:
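The "convert once, then reuse" behaviour that the `--trt` flag implies can be sketched as a cache-or-build helper. This is an assumption about the pattern, not the repo's code: the `.engine` suffix and `build_fn` hook (standing in for the actual TensorRT conversion call) are illustrative.

```python
from pathlib import Path

def ensure_engine(model_path, build_fn):
    """Return the cached engine path for a model, building it only if absent.

    `build_fn(src, dst)` stands in for the expensive TensorRT conversion;
    it runs once, after which the cached .engine file is reused.
    """
    engine = Path(model_path).with_suffix(".engine")
    if not engine.exists():
        build_fn(model_path, str(engine))  # one-time conversion
    return engine
```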