# YOLO detector and SOTA Multi-object tracker Toolbox

## ❗❗Important Notes

Compared to the previous version, this is an ***entirely new version (branch v2)***!!!

**Please use this version directly, as I have rewritten almost all of the code to improve readability and results, and to correct some errors in the previous code.**

```bash
git clone https://github.com/JackWoo0831/Yolov7-tracker.git
```

🙌 ***If you have any suggestions for adding trackers***, please leave a comment in the Issues section with the paper title or a link! Everyone is welcome to contribute to making this repo better.
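
If the `v2` branch is not checked out by default after cloning, you can switch to it explicitly (a minimal sketch; the branch name is taken from the note above):

```bash
cd Yolov7-tracker
git checkout v2
```
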
## ❤️ Introduction

This repo is a toolbox that implements **multi-object trackers following the tracking-by-detection paradigm**. The supported detectors are:

- YOLOX
- YOLO v7
- YOLO v8

the supported trackers are:

- SORT
- DeepSORT
- ByteTrack ([ECCV2022](https://arxiv.org/pdf/2110.06864))
- Bot-SORT ([arxiv2206](https://arxiv.org/pdf/2206.14651.pdf))
- OCSORT ([CVPR2023](https://openaccess.thecvf.com/content/CVPR2023/papers/Cao_Observation-Centric_SORT_Rethinking_SORT_for_Robust_Multi-Object_Tracking_CVPR_2023_paper.pdf))
- C_BIoU Track ([arxiv2211](https://arxiv.org/pdf/2211.14317v2.pdf))
- Strong SORT (***coming soon!***)

and the supported reid models are:

- OSNet
- the extractor from DeepSORT

The highlights are:
- Supports more trackers than MMTracking
- Rewrites multiple trackers in a ***unified code style***, so there is no need to configure a separate environment for each tracker
- Modular design that ***decouples*** the detector, tracker, reid model and Kalman filter, making it easy to conduct experiments

![gif](figure/demo.gif)

## 🗺️ Roadmap

- [ ] Add StrongSORT
- [ ] Add a function to save result videos
- [ ] Add a timer function to calculate FPS

## 🔨 Installation

The basic environment is:
- Ubuntu 18.04
- Python: 3.9, PyTorch: 1.12
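
If you are setting up from scratch, one possible way to create a matching environment is sketched below (the conda environment name and the exact torch/torchvision builds are illustrative assumptions, not requirements from this repo):

```bash
conda create -n yolo_tracker python=3.9 -y
conda activate yolo_tracker
# torch 1.12.0 pairs with torchvision 0.13.0; pick CUDA-specific wheels if you need GPU support
pip3 install torch==1.12.0 torchvision==0.13.0
```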

Run the following command to install the other packages:

```bash
pip3 install -r requirements.txt
```

### 🔍 Detector installation

1. YOLOX:

The version of YOLOX is **0.1.0 (same as ByteTrack)**. To install it, clone the ByteTrack repo somewhere and run:

```bash
git clone https://github.com/ifzhang/ByteTrack.git
cd ByteTrack

python3 setup.py develop
```


2. YOLO v7:

There is no need to execute additional steps, as this repo itself is based on YOLO v7.

3. YOLO v8:

Please run:

```bash
pip3 install ultralytics==8.0.94
```
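
To quickly confirm that the package is available, you can print its version (a simple sanity check, not a step from the original instructions):

```bash
python3 -c "import ultralytics; print(ultralytics.__version__)"
```
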
### 📑 Data preparation

***If you do not want to test on a specific dataset and only want to run demos, please skip this section.***

***No matter which dataset you want to test on, please organize it in the following way (YOLO style):***

```
dataset_name
   |---images
        |---train
             |---sequence_name1
                  |---000001.jpg
                  |---000002.jpg ...
        |---val ...
        |---test ...
```

You can refer to the code in `./tools` to see how to organize the datasets.
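
As a rough illustration of that layout (the paths and sequence names below are made up for the example; the scripts in `./tools` are the reference for real conversions), arranging an existing folder of frame images could look like:

```bash
# illustrative only: place each sequence's frames under images/<split>/<sequence_name>/
mkdir -p dataset_name/images/train dataset_name/images/val dataset_name/images/test
cp -r /path/to/raw_sequences/sequence_name1 dataset_name/images/train/
```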

***Then, you need to prepare a `yaml` config file that tells the code where to find the images.***

Some examples are in `tracker/config_files`. The important keys are:

```
DATASET_ROOT: '/data/xxxx/datasets/MOT17'  # your dataset root
SPLIT: test  # train, test or val
CATEGORY_NAMES:  # same as in YOLO training
  - 'pedestrian'

CATEGORY_DICT:
  0: 'pedestrian'
```
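
For instance, a config for the UAVDT examples used in the Tracking section below would be saved as something like `tracker/config_files/uavdt.yaml` (this naming is my assumption from how `--dataset` is described), and is then selected by its base name:

```bash
python tracker/track.py --dataset uavdt --detector yolov8 --tracker sort --kalman_format sort --detector_model_path weights/yolov8l_UAVDT_60epochs_20230509.pt
```
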
## 🚗 Practice

### 🏃 Training

The trackers themselves generally have no parameters that need training; only the detector does. Please refer to the training methods of the different detectors to train the YOLO models.

Some references that may help you:

- YOLOX: `tracker/yolox_utils/train_yolox.py`

- YOLO v7:

```shell
python train_aux.py --dataset visdrone --workers 8 --device <$GPU_id$> --batch-size 16 --data data/visdrone_all.yaml --img 1280 1280 --cfg cfg/training/yolov7-w6.yaml --weights <$YOLO v7 pretrained model path$> --name yolov7-w6-custom --hyp data/hyp.scratch.custom.yaml
```

- YOLO v8: `tracker/yolov8_utils/train_yolov8.py`

### 😊 Tracking!

If you only want to run a demo:

```bash
python tracker/track_demo.py --obj ${video path or images folder path} --detector ${yolox, yolov8 or yolov7} --tracker ${tracker name} --kalman_format ${kalman format, sort, byte, ...} --detector_model_path ${detector weight path} --save_images
```

For example:

```bash
python tracker/track_demo.py --obj M0203.mp4 --detector yolov8 --tracker deepsort --kalman_format byte --detector_model_path weights/yolov8l_UAVDT_60epochs_20230509.pt --save_images
```
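
The `--obj` argument also accepts a folder of images; for instance (the folder path below is illustrative, while the other arguments reuse values shown elsewhere in this README):

```bash
python tracker/track_demo.py --obj ./demo_images/ --detector yolov7 --tracker bytetrack --kalman_format byte --detector_model_path weights/yolov7_UAVDT_35epochs_20230507.pt --save_images
```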

If you want to run trackers on a dataset:

```bash
python tracker/track.py --dataset ${dataset name, related to the yaml file} --detector ${yolox, yolov8 or yolov7} --tracker ${tracker name} --kalman_format ${kalman format, sort, byte, ...} --detector_model_path ${detector weight path}
```

For example:

- SORT: `python tracker/track.py --dataset uavdt --detector yolov8 --tracker sort --kalman_format sort --detector_model_path weights/yolov8l_UAVDT_60epochs_20230509.pt`

- DeepSORT: `python tracker/track.py --dataset uavdt --detector yolov7 --tracker deepsort --kalman_format byte --detector_model_path weights/yolov7_UAVDT_35epochs_20230507.pt`

- ByteTrack: `python tracker/track.py --dataset uavdt --detector yolov8 --tracker bytetrack --kalman_format byte --detector_model_path weights/yolov8l_UAVDT_60epochs_20230509.pt`

- OCSort: `python tracker/track.py --dataset uavdt --detector yolov8 --tracker ocsort --kalman_format ocsort --detector_model_path weights/yolov8l_UAVDT_60epochs_20230509.pt`

- C-BIoU Track: `python tracker/track.py --dataset uavdt --detector yolov8 --tracker c_bioutrack --kalman_format bot --detector_model_path weights/yolov8l_UAVDT_60epochs_20230509.pt`

- BoT-SORT: `python tracker/track.py --dataset uavdt --detector yolox --tracker botsort --kalman_format bot --detector_model_path weights/yolox_m_uavdt_50epochs.pth.tar`

### ✅ Evaluation

Coming soon. As an alternative, after obtaining the result txt files, you can use the [Easier to use TrackEval repo](https://github.com/JackWoo0831/Easier_To_Use_TrackEval).
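
A starting point for that route (the evaluation steps themselves are described in that repo's README, not here):

```bash
git clone https://github.com/JackWoo0831/Easier_To_Use_TrackEval.git
```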
