Commit 947af32 ("update")
1 parent 2416e11

File tree: 4 files changed (+10 -82 lines changed)


README.md

Lines changed: 8 additions & 80 deletions

````diff
@@ -1,16 +1,12 @@
-[cars-yolo-output]: examples/assets/cars.gif "Sample Output with YOLO"
-[cows-tf-ssd-output]: examples/assets/cows.gif "Sample Output with SSD"
+# Application Areas
 
-# Multi-object trackers in Python
-Easy to use implementation of various multi-object tracking algorithms.
+This work is grounded in **applying ultrasonic levitation to droplet manipulation on superhydrophobic surfaces** and, on that basis, builds a three-axis droplet-manipulation system assisted by **machine vision**. The aim is to use a neural network for droplet detection and tracking, automate droplet manipulation, and improve its precision. A lightweight network allows an accurate droplet detection and tracking algorithm to run even on edge-computing devices.
 
-[![DOI](https://zenodo.org/badge/148338463.svg)](https://zenodo.org/badge/latestdoi/148338463)
+## Available Object Detector
 
-
-`YOLOv3 + CentroidTracker` | `TF-MobileNetSSD + CentroidTracker`
-:-------------------------:|:-------------------------:
-![Cars with YOLO][cars-yolo-output] | ![Cows with tf-SSD][cows-tf-ssd-output]
-Video source: [link](https://flic.kr/p/L6qyxj) | Video source: [link](https://flic.kr/p/26WeEWy)
+```
+NanoDet-Plus
+```
 
 ## Available Multi Object Trackers
 
@@ -21,18 +17,8 @@ CentroidKF_Tracker
 SORT
 ```
 
-## Available OpenCV-based object detectors:
-
-```
-detector.TF_SSDMobileNetV2
-detector.Caffe_SSDMobileNet
-detector.YOLOv3
-```
-
 ## Installation
 
-Pip install for OpenCV (version 3.4.3 or later) is available [here](https://pypi.org/project/opencv-python/) and can be done with the following command:
-
 ```
 git clone https://github.com/vvEverett/multi-object-tracker.git
 cd multi-object-tracker
@@ -42,65 +28,7 @@ python setup.py develop
 python setup_nanodet.py develop
 ```
 
-**Note - for using neural network models with GPU**
-For using the opencv `dnn`-based object detection modules provided in this repository with GPU, you may have to compile a CUDA enabled version of OpenCV from source.
-* To build opencv from source, refer the following links:
-[[link-1](https://docs.opencv.org/master/df/d65/tutorial_table_of_content_introduction.html)],
-[[link-2](https://www.pyimagesearch.com/2020/02/03/how-to-use-opencvs-dnn-module-with-nvidia-gpus-cuda-and-cudnn/)]
-
-## How to use?: Examples
+## How to use?
 
-The interface for each tracker is simple and similar. Please refer the example template below.
-
-```
-from motrackers import CentroidTracker # or IOUTracker, CentroidKF_Tracker, SORT
-input_data = ...
-detector = ...
-tracker = CentroidTracker(...) # or IOUTracker(...), CentroidKF_Tracker(...), SORT(...)
-while True:
-    done, image = <read(input_data)>
-    if done:
-        break
-    detection_bboxes, detection_confidences, detection_class_ids = detector.detect(image)
-    # NOTE:
-    # * `detection_bboxes` are numpy.ndarray of shape (n, 4) with each row containing (bb_left, bb_top, bb_width, bb_height)
-    # * `detection_confidences` are numpy.ndarray of shape (n,);
-    # * `detection_class_ids` are numpy.ndarray of shape (n,).
-    output_tracks = tracker.update(detection_bboxes, detection_confidences, detection_class_ids)
-    # `output_tracks` is a list with each element containing tuple of
-    # (<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>)
-    for track in output_tracks:
-        frame, id, bb_left, bb_top, bb_width, bb_height, confidence, x, y, z = track
-        assert len(track) == 10
-        print(track)
-```
-
-Please refer [examples](https://github.com/adipandas/multi-object-tracker/tree/master/examples) folder of this repository for more details. You can clone and run the examples.
-
-## Pretrained object detection models
-
-You will have to download the pretrained weights for the neural-network models.
-The shell scripts for downloading these are provided [here](https://github.com/adipandas/multi-object-tracker/tree/master/examples/pretrained_models) below respective folders.
-Please refer [DOWNLOAD_WEIGHTS.md](https://github.com/adipandas/multi-object-tracker/blob/master/DOWNLOAD_WEIGHTS.md) for more details.
-
-### Notes
-* There are some variations in implementations as compared to what appeared in papers of `SORT` and `IoU Tracker`.
-* In case you find any bugs in the algorithm, I will be happy to accept your pull request or you can create an issue to point it out.
-
-## References, Credits and Contributions
-Please see [REFERENCES.md](https://github.com/adipandas/multi-object-tracker/blob/master/docs/readme/REFERENCES.md) and [CONTRIBUTING.md](https://github.com/adipandas/multi-object-tracker/blob/master/docs/readme/CONTRIBUTING.md).
-
-## Citation
-
-If you use this repository in your work, please consider citing it with:
-```
-@misc{multiobjtracker_amd2018,
-  author = {Deshpande, Aditya M.},
-  title = {Multi-object trackers in Python},
-  year = {2020},
-  publisher = {GitHub},
-  journal = {GitHub repository},
-  howpublished = {\url{https://github.com/adipandas/multi-object-tracker}},
-}
-```
+Run main.py to start droplet detection and tracking on test.avi.
 
````
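The tracker usage template removed from the README above documents the interface this fork still relies on: `tracker.update(bboxes, confidences, class_ids)` takes `(n, 4)` boxes as `(bb_left, bb_top, bb_width, bb_height)` and returns MOT-challenge-style 10-tuples `(frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z)`. A minimal self-contained sketch of that contract (a toy nearest-centroid matcher, not the motrackers implementation; all names are illustrative):

```python
import math

class ToyCentroidTracker:
    """Illustrative tracker following the update() contract from the
    removed README template. class_ids are accepted but ignored here."""

    def __init__(self, max_distance=50.0):
        self.max_distance = max_distance  # max centroid shift to keep an ID
        self.next_id = 0
        self.tracks = {}   # track id -> last centroid (cx, cy)
        self.frame = 0

    def update(self, bboxes, confidences, class_ids):
        self.frame += 1
        output = []
        for (left, top, w, h), conf in zip(bboxes, confidences):
            cx, cy = left + w / 2.0, top + h / 2.0
            # greedily match this detection to the nearest existing track
            best_id, best_d = None, self.max_distance
            for tid, (tx, ty) in self.tracks.items():
                d = math.hypot(cx - tx, cy - ty)
                if d < best_d:
                    best_id, best_d = tid, d
            if best_id is None:  # no track close enough: start a new one
                best_id = self.next_id
                self.next_id += 1
            self.tracks[best_id] = (cx, cy)
            # MOT-challenge style 10-tuple; x, y, z are unused -> -1
            output.append((self.frame, best_id, left, top, w, h, conf, -1, -1, -1))
        return output

tracker = ToyCentroidTracker()
tracks = tracker.update([(10, 10, 20, 20)], [0.9], [0])
print(tracks)  # one 10-tuple for the single detection
```

A detection near the same spot in the next frame keeps its ID, which is the property the droplet-tracking loop depends on.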
main.py

Lines changed: 2 additions & 2 deletions

```diff
@@ -5,8 +5,8 @@
 from motrackers.utils import draw_tracks
 from nanodet.util import Logger, cfg, load_config, load_model_weight
 
-VIDEO_FILE = r"D:\shijue\LiquidDrop\22.avi"
-WEIGHTS_PATH = 'weight/LiquidV4.pth'
+VIDEO_FILE = "test.avi"
+WEIGHTS_PATH = 'weight/LiquidV5.pth'
 CONFIG_FILE_PATH = 'config/LiquidDetect416.yml'
 CHOSEN_TRACKER = 'SORT'
 CONFIDENCE_THRESHOLD = 0.4  # confidence threshold for filtering detections
```
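The `CONFIDENCE_THRESHOLD = 0.4` constant suggests main.py drops low-confidence NanoDet-Plus detections before handing them to the tracker. A minimal sketch of that filtering step (the function name and sample data are illustrative, since main.py's body is not shown in this diff):

```python
CONFIDENCE_THRESHOLD = 0.4  # same value as configured in main.py

def filter_detections(bboxes, confidences, class_ids,
                      threshold=CONFIDENCE_THRESHOLD):
    """Keep only detections whose confidence meets the threshold,
    preserving the parallel bboxes/confidences/class_ids structure."""
    kept = [(b, c, k) for b, c, k in zip(bboxes, confidences, class_ids)
            if c >= threshold]
    if not kept:
        return [], [], []
    boxes, confs, ids = zip(*kept)
    return list(boxes), list(confs), list(ids)

boxes, confs, ids = filter_detections(
    [(5, 5, 10, 10), (50, 60, 8, 8)], [0.92, 0.31], [0, 0])
print(boxes)  # only the high-confidence box survives
```

Filtering before `tracker.update()` keeps spurious low-score droplet detections from spawning short-lived track IDs.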

test.avi

1.14 MB
Binary file not shown.
