
Commit fed490e

update track_demo and readme

1 parent 16b4b27 · commit fed490e

File tree: 5 files changed, +161 −42 lines changed


README.md

Lines changed: 47 additions & 8 deletions
````diff
@@ -2,16 +2,19 @@

 ## ❗❗Important Notes

-Compared to the previous version, this is an ***entirely new version (branch v2)***!!!
+There has been a major update recently, and I have **reorganized** all the code for accuracy and readability. More importantly, **two new SOTA trackers have been added** (ImproAssoc and TrackTrack), and the **TensorRT** engine is supported!

-**Please use this version directly, as I have rewritten almost all the code to ensure better readability and improved results, and corrected some errors in the past code.**
+The new version is on **branch v2.1**:

 ```bash
 git clone https://github.com/JackWoo0831/Yolov7-tracker.git
-git checkout v2  # change to v2 branch !!
+git checkout v2.1  # change to v2.1 branch !!
 ```

-🙌 ***If you have any suggestions for adding trackers***, please leave a comment in the Issues section with the paper title or link! Everyone is welcome to help make this repo better.
+🙌 ***The QQ group has been established and everyone is welcome to join!*** You can report bugs, make suggestions, or collaborate on interesting CV/AI projects in the QQ group!
+However, bugs and issues should still be raised first in the **Issues section on GitHub** so that others can see them.
+
+<img src="figure/GroupQRcode.jpg" alt="group" style="width:40%;">

 <div align="center">
````
````diff
@@ -21,9 +24,24 @@ git checkout v2 # change to v2 branch !!

 ## 🗺️ Latest News

-- ***2025.4.14*** Fix some minor bugs ([issue#144](https://github.com/JackWoo0831/Yolov7-tracker/issues/144)) and fix the lost-tracklet bug in SORT.
-- ***2025.4.7*** Add more Re-ID modules (ShuffleNet, VehicleNet, MobileNet), fix some bugs (such as abandoning bbox aspect-ratio updates while a tracklet is not activated), and add some functions (customizable low filter threshold, fusing detection scores, etc.).
-- ***2025.4.3*** Support the newest ultralytics version (YOLO v3 ~ v12) and fix some bugs in Hybrid SORT.
+- ***2025.7.8*** New version 2.1 released. Add ImproAssoc and TrackTrack, and support TensorRT. The other details are as follows:
+
+<details>
+<summary>Update details</summary>
+
+1. Re-annotate and reorganize all functions in `matching.py`.
+2. For camera motion compensation, custom feature extraction algorithms (SIFT, ORB, ECC) can be used; specify the `--cmc_method` parameter when running `track.py` (or `track_demo.py`).
+3. For methods such as BoT-SORT and ByteTrack, the low-confidence screening threshold was previously fixed at 0.1. You can now set the `--conf_thresh_low` parameter manually when running `track.py`.
+4. Add the `init_thresh` parameter as the track-initialization threshold, abandoning the original `args.conf + 0.1` setting. Specify the `--init_thresh` parameter when running `track.py`.
+5. In ReID feature extraction, the crop size was previously fixed at `(h, w) = (128, 64)`; it can now be set manually. Specify the `--reid_crop_size` parameter when running `track.py`, for example `--reid_crop_size 32 64`.
+6. Make all trackers inherit from the BaseTracker class for good code reuse.
+7. Fix the ReID similarity calculation bug in StrongSORT.
+8. Abandon cython_bbox for better compatibility across numpy versions.
+9. Abandon `np.float`, etc., for better compatibility across numpy versions.
+10. Reorganize `requirements.txt`.
+</details>

 ## ❤️ Introduction
````
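For readers skimming the changelog: items 3 and 4 above expose three separate confidence thresholds. Below is a minimal sketch of how such thresholds typically partition detections in two-stage (ByteTrack-style) association. It assumes the semantics described in the list, not the repo's exact code; the `init_thresh=0.5` default is an arbitrary example value, while `conf_thresh=0.2` and `conf_thresh_low=0.1` match the argparse defaults in the `track_demo.py` diff further down this page.

```python
import numpy as np

# Sketch only: partition detections by the thresholds exposed in v2.1
# (--conf_thresh, --conf_thresh_low, --init_thresh); the repo's trackers
# implement this logic inside their own update loops.
def split_detections(scores, conf_thresh=0.2, conf_thresh_low=0.1, init_thresh=0.5):
    high = scores >= conf_thresh                                 # first association stage
    low = (scores >= conf_thresh_low) & (scores < conf_thresh)   # second stage
    init = scores >= init_thresh    # confident enough to start a brand-new tracklet
    return high, low, init

scores = np.array([0.95, 0.55, 0.15, 0.08])
high, low, init = split_detections(scores)
print(scores[high], scores[low], scores[init])  # [0.95 0.55] [0.15] [0.95 0.55]
```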

````diff
@@ -45,7 +63,9 @@ and the tracker supports:
 - Strong SORT ([IEEE TMM 2023](https://arxiv.org/pdf/2202.13514))
 - Sparse Track ([arxiv 2306](https://arxiv.org/pdf/2306.05238))
 - UCMC Track ([AAAI 2024](http://arxiv.org/abs/2312.08952))
-- Hybrid SORT([AAAI 2024](https://ojs.aaai.org/index.php/AAAI/article/view/28471))
+- Hybrid SORT ([AAAI 2024](https://ojs.aaai.org/index.php/AAAI/article/view/28471))
+- ImproAssoc ([CVPRW 2023](https://openaccess.thecvf.com/content/CVPR2023W/E2EAD/papers/Stadler_An_Improved_Association_Pipeline_for_Multi-Person_Tracking_CVPRW_2023_paper.pdf))
+- TrackTrack ([CVPR 2025](https://openaccess.thecvf.com/content/CVPR2025/html/Shim_Focusing_on_Tracks_for_Online_Multi-Object_Tracking_CVPR_2025_paper.html))

 and the reid model supports:
@@ -188,6 +208,12 @@ For example:
 python tracker/track_demo.py --obj M0203.mp4 --detector yolo_ultra_v8 --tracker deepsort --kalman_format byte --detector_model_path weights/yolov8l_UAVDT_60epochs_20230509.pt --save_images
 ```

+or
+
+```bash
+python tracker/track_demo.py --obj /root/datasets/visdrone/images/val/seq/ --detector yolox --tracker bytetrack --kalman_format byte --detector_model_path weights/yolox_m_VisDrone_55epochs_20230509.pth.tar --yolox_exp_file ./tracker/yolox_utils/yolox_m.py --save_images
+```
+
 **If you want to run trackers on dataset**:

 ```bash
````
````diff
@@ -240,6 +266,19 @@ In addition, you can also specify
 >
 > 2. The code does not contain the camera motion compensation part between every two frames; please refer to [https://github.com/corfyi/UCMCTrack/issues/12](https://github.com/corfyi/UCMCTrack/issues/12). From my perspective, since the algorithm's name says 'uniform', updating the compensation between every two frames is not necessary.

+### ✨ TensorRT Conversion and Inference
+
+This code supports **fully automatic** generation of and inference with a TensorRT engine, **which can be used for both the detection model and the ReID model**. If you have not converted a TensorRT engine yet, just add the `--trt` parameter when running, for example:
+
+```bash
+python tracker/track.py --dataset mot17 --detector yolox --tracker ocsort --kalman_format ocsort --detector_model_path weights/bytetrack_m_mot17.pth.tar --reid --reid_model shufflenet_v2_x1_0 --reid_model_path shufflenetv2_x1-5666bf0f80.pth --trt
+```
+
+If you already have an engine, just set the relevant path to the engine file, and the `--trt` parameter can be omitted:
+
+```bash
+python tracker/track.py --dataset visdrone_part --detector b8_ultra --tracker deepsort --kalman_format byte --detector_model_path weights/yolov8l_VisDroneDet_35epochs_20230605.engine --reid deepsort --reid_model_path weights/ckpt.engine
+```

 ### ✅ Evaluation
````
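The automatic conversion described above reduces to a file-extension check; the full implementation is in the `tracker/track_demo.py` diff further down this page. As a condensed sketch (class names come from the `accelerations.tensorrt_tools` module introduced in this commit; the abbreviated signatures are illustrative, not a definitive API):

```python
from accelerations.tensorrt_tools import TensorRTConverter, TensorRTInference

# Condensed sketch of the --trt dispatch; see the track_demo.py diff below
# for the real code paths per detector family.
def load_with_trt(model, ckpt_path, img_size, device):
    if not ckpt_path.endswith('.engine'):  # plain checkpoint: build an engine first
        converter = TensorRTConverter(model, input_shape=[3, *img_size],
                                      ckpt_path=ckpt_path, min_opt_max_batch=[1, 1, 1],
                                      device=device, load_ckpt=True, ckpt_key='model')
        converter.export()               # serializes the engine to disk
        ckpt_path = converter.trt_model  # path of the freshly built engine
    # an existing .engine path is loaded directly, which is why --trt can be
    # omitted when the supplied path already ends with .engine
    return TensorRTInference(engine_path=ckpt_path,
                             min_opt_max_batch=[1, 1, 1], device=device)
```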

README_CN.md

Lines changed: 44 additions & 7 deletions
````diff
@@ -2,22 +2,39 @@

 ## ❗❗Important Notes

-Compared to the previous version, this is an ***entirely new version (branch v2)***!!!
+There has been a major update recently; for accuracy and readability, I have **reorganized** all the code. More importantly, **two new SOTA trackers have been added** (ImproAssoc and TrackTrack), and the **TensorRT engine** is supported.

-**Please use this version directly, as I have rewritten almost all the code to ensure better readability and improved results, and corrected some errors in the past code.**
+The new version is released on **branch v2.1**:

 ```bash
 git clone https://github.com/JackWoo0831/Yolov7-tracker.git
-git checkout v2  # change to v2 branch !!
+git checkout v2.1  # change to v2.1 branch !!
 ```

-🙌 ***If you have any suggestions for adding trackers***, please leave a comment in the Issues section with the paper title or link! Everyone is welcome to help make this repo better.
+🙌 ***The QQ group has been established and everyone is welcome to join!*** You can report bugs, make suggestions, or collaborate on interesting CV/AI projects in the QQ group!
+However, **bugs and issues should still be raised first in the Issues section so that more people can see them.**
+
+<img src="figure/GroupQRcode.jpg" alt="group" style="width:40%;">

 ## 🗺️ Latest News

-- ***2025.4.14*** Fix the bugs mentioned in [issue#144](https://github.com/JackWoo0831/Yolov7-tracker/issues/144) and fix the bug in SORT's handling of lost tracklets.
-- ***2025.4.7*** Add more Re-ID models (ShuffleNet, VehicleNet, MobileNet), fix some bugs (e.g., stop updating the bounding-box aspect ratio while a tracklet is inactive), and add some small features (e.g., the lowest threshold of the two-stage association strategy, previously fixed at 0.1, can now be modified; add an option to fuse IoU with detection confidence).
-- ***2025.4.3*** Support the newest version of the ultralytics library and fix some bugs in Hybrid SORT.
+- ***2025.7.8*** New version 2.1 released. Add ImproAssoc and TrackTrack, and support TensorRT. The other details are as follows:
+
+<details>
+<summary>Update details</summary>
+
+1. Re-annotate and reorganize all functions in `matching.py`.
+2. For camera motion compensation, the feature extraction algorithm can be customized (SIFT, ORB, ECC); specify the `--cmc_method` parameter when running `track.py`.
+3. For methods such as BoT-SORT and ByteTrack, the low-confidence screening threshold was previously fixed at `0.1`. It can now be set manually; specify the `--conf_thresh_low` parameter when running `track.py` (or `track_demo.py`).
+4. Add the `init_thresh` parameter as the track-initialization threshold, abandoning the original fixed `args.conf + 0.1`. Specify the `--init_thresh` parameter when running `track.py`.
+5. In ReID feature extraction, the crop-and-resize size was previously fixed at `(h, w) = (128, 64)`; it can now be set manually. Specify the `--reid_crop_size` parameter when running `track.py`, for example `--reid_crop_size 32 64`.
+6. Make all trackers inherit from the BaseTracker class for good code reuse.
+7. Fix the ReID similarity calculation bug in StrongSORT.
+8. Abandon cython_bbox for better numpy version compatibility.
+9. Abandon `np.float`, etc., for better numpy version compatibility.
+10. Reorganize `requirements.txt`.
+</details>

 ## ❤️ Introduction
````
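A side note on item 2 of the list above: SIFT, ORB, and ECC are standard OpenCV routines, so what `--cmc_method` selects between can be sketched with plain `cv2` calls. This is only an illustration of the three options; the repo's CMC module wires them into the trackers, and its exact interface may differ.

```python
import cv2
import numpy as np

# Illustrative only: estimate frame-to-frame camera motion with the method
# named by --cmc_method (sift / orb / ecc). Inputs are grayscale uint8 frames.
def estimate_cmc(prev_gray, curr_gray, method='orb'):
    if method == 'ecc':  # direct, intensity-based alignment
        warp = np.eye(2, 3, dtype=np.float32)
        _, warp = cv2.findTransformECC(prev_gray, curr_gray, warp,
                                       cv2.MOTION_EUCLIDEAN)
        return warp
    # feature-based alignment: detect, match, then fit an affine with RANSAC
    feat = cv2.SIFT_create() if method == 'sift' else cv2.ORB_create()
    kp1, des1 = feat.detectAndCompute(prev_gray, None)
    kp2, des2 = feat.detectAndCompute(curr_gray, None)
    norm = cv2.NORM_L2 if method == 'sift' else cv2.NORM_HAMMING
    matches = cv2.BFMatcher(norm, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    warp, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return warp  # 2x3 matrix mapping the previous frame onto the current one
```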
````diff
@@ -41,6 +58,8 @@ git checkout v2 # change to v2 branch !!
 - Sparse Track ([arxiv 2306](https://arxiv.org/pdf/2306.05238))
 - UCMC Track ([AAAI 2024](http://arxiv.org/abs/2312.08952))
 - Hybrid SORT([AAAI 2024](https://ojs.aaai.org/index.php/AAAI/article/view/28471))
+- ImproAssoc ([CVPRW 2023](https://openaccess.thecvf.com/content/CVPR2023W/E2EAD/papers/Stadler_An_Improved_Association_Pipeline_for_Multi-Person_Tracking_CVPRW_2023_paper.pdf))
+- TrackTrack ([CVPR 2025](https://openaccess.thecvf.com/content/CVPR2025/html/Shim_Focusing_on_Tracks_for_Online_Multi-Object_Tracking_CVPR_2025_paper.html))

 and the ReID model supports:
````
````diff
@@ -231,12 +250,30 @@ python tracker/track.py --dataset ${dataset name, related with the yaml file} --

 - Hybrid SORT: `python tracker/track.py --dataset visdrone_part --detector yolo_ultra --tracker hybridsort --kalman_format hybridsort --detector_model_path weights/yolov8l_VisDrone_35epochs_20230509.pt --save_images`

+- ImproAssoc: `python tracker/track.py --dataset visdrone_part --detector yolo_ultra --tracker improassoc --kalman_format bot --detector_model_path weights/yolov8l_VisDrone_35epochs_20230509.pt --save_images`
+
+- TrackTrack: `python tracker/track.py --dataset visdrone_part --detector yolo_ultra --tracker tracktrack --kalman_format bot --detector_model_path weights/yolov8l_VisDrone_35epochs_20230509.pt --save_images --nms_thresh 0.95 --reid`
+
 > **Important notes for UCMC Track:**
 >
 > 1. Camera parameters. UCMC Track requires the camera's intrinsic and extrinsic parameters. Please organize them in the format of `tracker/cam_ram_files/uavdt/M0101.txt`; one video sequence corresponds to one txt file. If you do not have labeled parameters, you can refer to the estimation toolbox in the original repo ([https://github.com/corfyi/UCMCTrack](https://github.com/corfyi/UCMCTrack)).
 >
 > 2. The code does not include camera motion compensation between every two frames; see [https://github.com/corfyi/UCMCTrack/issues/12](https://github.com/corfyi/UCMCTrack/issues/12). In my view, since the algorithm is called "uniform camera motion compensation", updating the compensation between every two frames is unnecessary.

+### ✨ TensorRT Conversion and Inference
+
+This code supports **fully automatic** generation of and inference with a TensorRT engine, **for both the detection model and the ReID model**. If you have not converted a TensorRT engine yet, just add the `--trt` parameter at runtime, for example:
+
+```bash
+python tracker/track.py --dataset mot17 --detector yolox --tracker ocsort --kalman_format ocsort --detector_model_path weights/bytetrack_m_mot17.pth.tar --reid --reid_model shufflenet_v2_x1_0 --reid_model_path shufflenetv2_x1-5666bf0f80.pth --trt
+```
+
+If you already have an engine, just set the relevant path to the engine file, and the `--trt` parameter can be omitted:
+
+```bash
+python tracker/track.py --dataset visdrone_part --detector b8_ultra --tracker deepsort --kalman_format byte --detector_model_path weights/yolov8l_VisDroneDet_35epochs_20230605.engine --reid deepsort --reid_model_path weights/ckpt.engine
+```

 ### ✅ Evaluation

 Coming soon! As an alternative, you can use this repo: [Easier to use TrackEval repo](https://github.com/JackWoo0831/Easier_To_Use_TrackEval).
````

figure/GroupQRcode.jpg

168 KB

tracker/track.py

Lines changed: 0 additions & 1 deletion
````diff
@@ -120,7 +120,6 @@ def get_args():
     parser.add_argument('--trace', type=bool, default=False, help='traced model of YOLO v7')
     # other model path
     parser.add_argument('--reid_model_path', type=str, default='./weights/osnet_x0_25.pth', help='path for reid model path')
-    parser.add_argument('--dhn_path', type=str, default='./weights/DHN.pth', help='path of DHN path for DeepMOT')


     """other options"""
````

tracker/track_demo.py

Lines changed: 70 additions & 26 deletions
````diff
@@ -65,6 +65,13 @@
         logger.warning('Load yolov8 fail. If you want to use yolov8, please check the installation.')
         pass

+# TensorRT
+try:
+    from accelerations.tensorrt_tools import TensorRTConverter, TensorRTInference
+except Exception as e:
+    logger.warning(e)
+    logger.warning('Load TensorRT fail. If you want to convert model to TensorRT, please install the packages.')
+
 TRACKER_DICT = {
     'sort': SortTracker,
     'bytetrack': ByteTracker,
````
````diff
@@ -94,6 +101,7 @@ def get_args():
     parser.add_argument('--kalman_format', type=str, default='default', help='use what kind of Kalman, sort, deepsort, byte, etc.')
     parser.add_argument('--img_size', type=int, default=1280, help='image size, [h, w]')
+    parser.add_argument('--reid_crop_size', type=int, default=[128, 64], nargs='+', help='crop size in reid model, [h, w]')

     parser.add_argument('--conf_thresh', type=float, default=0.2, help='filter tracks')
     parser.add_argument('--conf_thresh_low', type=float, default=0.1, help='filter low conf detections, used in two-stage association')
````
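The new `--reid_crop_size` argument above takes `(h, w)`, matching update detail 5 in the README. Below is a hypothetical sketch of how a crop size like this is typically applied before a ReID network; the repo's actual preprocessing lives in its ReID extraction code.

```python
import cv2
import numpy as np

# Hypothetical helper, not the repo's code: crop one detection from the frame
# and resize it to the (h, w) given by --reid_crop_size before ReID inference.
def crop_for_reid(frame: np.ndarray, tlbr, crop_size=(128, 64)) -> np.ndarray:
    h, w = crop_size                         # note the (h, w) order of the flag
    x1, y1, x2, y2 = (int(v) for v in tlbr)
    x1, y1 = max(x1, 0), max(y1, 0)          # clip the box to the image
    x2, y2 = min(x2, frame.shape[1]), min(y2, frame.shape[0])
    patch = frame[y1:y2, x1:x2]
    return cv2.resize(patch, (w, h))         # cv2.resize expects (width, height)
```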
````diff
@@ -111,7 +119,6 @@ def get_args():
     parser.add_argument('--trace', type=bool, default=False, help='traced model of YOLO v7')
     # other model path
     parser.add_argument('--reid_model_path', type=str, default='./weights/osnet_x0_25.pth', help='path for reid model path')
-    parser.add_argument('--dhn_path', type=str, default='./weights/DHN.pth', help='path of DHN path for DeepMOT')


     """other options"""
````
````diff
@@ -130,6 +137,10 @@ def get_args():

     """camera parameter"""
     parser.add_argument('--camera_parameter_folder', type=str, default='./tracker/cam_param_files', help='folder path of camera parameter files')
+
+    """tensorrt options"""
+    parser.add_argument('--trt', action='store_true', help='use tensorrt engine to detect and reid')
+
     return parser.parse_args()

 def main(args):
````
````diff
@@ -156,48 +167,81 @@ def main(args):
         model.to(device)
         model.eval()

-        logger.info(f"loading detector {args.detector} checkpoint {args.detector_model_path}")
-        ckpt = torch.load(args.detector_model_path, map_location=device)
-        model.load_state_dict(ckpt['model'])
-        logger.info("loaded checkpoint done")
-        model = fuse_model(model)
-
-        stride = None  # match with yolo v7
-
-        logger.info(f'Now detector is on device {next(model.parameters()).device}')
+        if args.trt:  # convert trt
+            # check if need to convert
+            if not args.detector_model_path.endswith('.engine'):
+                trt_converter = TensorRTConverter(model, input_shape=[3, *model_img_size], ckpt_path=args.detector_model_path,
+                                                  min_opt_max_batch=[1, 1, 1], device=device, load_ckpt=True, ckpt_key='model')
+                trt_converter.export()
+                model = TensorRTInference(engine_path=trt_converter.trt_model, min_opt_max_batch=[1, 1, 1], device=device)
+            else:
+                model = TensorRTInference(engine_path=args.detector_model_path, min_opt_max_batch=[1, 1, 1], device=device)
+
+        else:  # normal load
+            logger.info(f"loading detector {args.detector} checkpoint {args.detector_model_path}")
+            ckpt = torch.load(args.detector_model_path, map_location=device)
+            model.load_state_dict(ckpt['model'])
+            logger.info("loaded checkpoint done")
+            model = fuse_model(model)
+            logger.info(f'Now detector is on device {next(model.parameters()).device}')
+
+        stride = None  # match with yolo v7

     elif args.detector == 'yolov7':

-        logger.info(f"loading detector {args.detector} checkpoint {args.detector_model_path}")
-        model = attempt_load(args.detector_model_path, map_location=device)
+        if args.trt:
+            # check if need to convert
+            stride = 32
+            model_img_size = check_img_size(args.img_size, s=32)
+            if not args.detector_model_path.endswith('.engine'):
+                model = attempt_load(args.detector_model_path, map_location=device)
+                trt_converter = TensorRTConverter(model, input_shape=[3, *model_img_size], ckpt_path=args.detector_model_path,
+                                                  min_opt_max_batch=[1, 1, 1], device=device, load_ckpt=False)
+                trt_converter.export()
+                model = TensorRTInference(engine_path=trt_converter.trt_model, min_opt_max_batch=[1, 1, 1], device=device)
+            else:
+                model = TensorRTInference(engine_path=args.detector_model_path, min_opt_max_batch=[1, 1, 1], device=device)

-        # get inference img size
-        stride = int(model.stride.max())  # model stride
-        model_img_size = check_img_size(args.img_size, s=stride)  # check img_size
+        else:
+            logger.info(f"loading detector {args.detector} checkpoint {args.detector_model_path}")
+            model = attempt_load(args.detector_model_path, map_location=device)

-        # Traced model
-        model = TracedModel(model, device=device, img_size=args.img_size)
-        # model.half()
+            # Traced model
+            model = TracedModel(model, device=device, img_size=args.img_size)
+            # model.half()

-        logger.info("loaded checkpoint done")
+            logger.info("loaded checkpoint done")
+            logger.info(f'Now detector is on device {next(model.parameters()).device}')

-        logger.info(f'Now detector is on device {next(model.parameters()).device}')
+            # get inference img size
+            stride = int(model.stride.max())  # model stride
+            model_img_size = check_img_size(args.img_size, s=stride)  # check img_size

     elif 'ultra' in args.detector:

-        logger.info(f"loading detector {args.detector} checkpoint {args.detector_model_path}")
-        model = YOLO(args.detector_model_path)
+        if args.trt:
+            # for ultralytics, we use the api provided by official ultralytics
+            # check if need to convert
+            if not args.detector_model_path.endswith('.engine'):
+                model = YOLO(args.detector_model_path)
+                model = YOLO(model.export(format="engine"))
+            else:
+                model = YOLO(args.detector_model_path)
+
+        else:
+            logger.info(f"loading detector {args.detector} checkpoint {args.detector_model_path}")
+            model = YOLO(args.detector_model_path)
+
+        logger.info("loaded checkpoint done")

         model_img_size = [None, None]
         stride = None

-        logger.info("loaded checkpoint done")
-
     else:
         logger.error(f"detector {args.detector} is not supprted")
         logger.error("If you want to use the yolo v8 by ultralytics, please specify the `--detector` \
-                     as the string including the substring `ultra`, \
-                     such as `yolo_ultra_v8` or `yolo11_ultralytics`")
+            as the string including the substring `ultra`, \
+            such as `yolo_ultra_v8` or `yolo11_ultralytics`")
         exit(0)

     """3. load sequences"""
````
