diff --git a/README.md b/README.md
index cee826a..20f0840 100644
--- a/README.md
+++ b/README.md
@@ -52,6 +52,32 @@ For using the opencv `dnn`-based object detection modules provided in this repos
 
 Please refer [examples](https://github.com/adipandas/multi-object-tracker/tree/master/examples) folder of this repository. You can clone and run the examples as shown [here](examples/readme.md).
 
+All trackers expose the same simple interface.
+
+```
+from motrackers import CentroidTracker  # or IOUTracker, CentroidKF_Tracker, SORT
+
+input_data = ...
+detector = ...
+tracker = CentroidTracker(...)
+
+while True:
+    done, image = ...  # read the next frame from input_data; done is True once the stream ends
+    if done:
+        break
+
+    detection_bboxes, detection_confidences, detection_class_ids = detector.detect(image)
+
+    output_tracks = tracker.track(detection_bboxes, detection_confidences, detection_class_ids)
+
+    # `output_tracks` is a list with each element containing a tuple of
+    # (<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <confidence>, <x>, <y>, <z>)
+    for track in output_tracks:
+        frame, id, bb_left, bb_top, bb_width, bb_height, confidence, x, y, z = track
+        assert len(track) == 10
+        print(track)
+```
+
 ## Pretrained object detection models
 
 You will have to download the pretrained weights for the neural-network models.
@@ -62,9 +88,9 @@ Please refer [DOWNLOAD_WEIGHTS.md](DOWNLOAD_WEIGHTS.md) for more details.
 * There are some variations in implementations as compared to what appeared in papers of `SORT` and `IoU Tracker`.
 * In case you find any bugs in the algorithm, I will be happy to accept your pull request or you can create an issue to point it out.
 
-## References and Credits
+## References, Credits and Contributions
 
-Please see [REFERENCES.md](REFERENCES.md).
+Please see [REFERENCES.md](docs/readme/REFERENCES.md) and [CONTRIBUTING.md](docs/readme/CONTRIBUTING.md).
 
 ## Citation
 
diff --git a/CODE_OF_CONDUCT.md b/docs/readme/CODE_OF_CONDUCT.md
similarity index 100%
rename from CODE_OF_CONDUCT.md
rename to docs/readme/CODE_OF_CONDUCT.md
diff --git a/CONTRIBUTING.md b/docs/readme/CONTRIBUTING.md
similarity index 100%
rename from CONTRIBUTING.md
rename to docs/readme/CONTRIBUTING.md
diff --git a/REFERENCES.md b/docs/readme/REFERENCES.md
similarity index 82%
rename from REFERENCES.md
rename to docs/readme/REFERENCES.md
index 655429e..e04e8cb 100644
--- a/REFERENCES.md
+++ b/docs/readme/REFERENCES.md
@@ -9,5 +9,6 @@ This work is based on the following literature:
 2. Bewley, A., Ge, Z., Ott, L., Ramos, F., & Upcroft, B. (2016, September). Simple online and realtime tracking. In 2016 IEEE International Conference on Image Processing (ICIP) (pp. 3464-3468). IEEE. [[arxiv](https://arxiv.org/abs/1602.00763)]
 3. YOLOv3. [[pdf](https://pjreddie.com/media/files/papers/YOLOv3.pdf)][[website](https://pjreddie.com/darknet/yolo/)]
 4. Kalman Filter. [[wiki](https://en.wikipedia.org/wiki/Kalman_filter)]
-5. TensorFlow Object Detection API [[github](https://github.com/tensorflow/models/tree/master/research/object_detection)]
-6. Caffe [[website](https://caffe.berkeleyvision.org/)][[github](https://github.com/BVLC/caffe)]
+5. TensorFlow Object Detection API. [[GitHub](https://github.com/tensorflow/models/tree/master/research/object_detection)]
+6. Caffe. [[website](https://caffe.berkeleyvision.org/)][[GitHub](https://github.com/BVLC/caffe)]
+7. OpenCV. [[website](https://opencv.org/)][[GitHub](https://github.com/opencv/opencv)]
diff --git a/examples/example_scripts/readme.md b/examples/example_scripts/readme.md
new file mode 100644
index 0000000..c296962
--- /dev/null
+++ b/examples/example_scripts/readme.md
@@ -0,0 +1,7 @@
+## How to use?
+
+To see how to use these example scripts, simply type the following in the terminal:
+
+```
+python3 <script-name>.py --help
+```
diff --git a/examples/motmetrics_eval/readme.md b/examples/motmetrics_eval/readme.md
index e1bed5d..2f847f9 100644
--- a/examples/motmetrics_eval/readme.md
+++ b/examples/motmetrics_eval/readme.md
@@ -1,7 +1,7 @@
 ### MOT Challenge file format
 
-[GitHub](https://github.com/adipandas/multi-object-tracker)
-[Home](https://adipandas.github.io/multi-object-tracker/)
+[[GitHub](https://github.com/adipandas/multi-object-tracker)]
+[[Home](https://adipandas.github.io/multi-object-tracker/)]
 
 The file format should be the same as the ground truth file, which is a CSV text-file containing one object instance per line.
 
diff --git a/requirements.txt b/requirements.txt
index 59db306..dffd9bf 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,7 +1,7 @@
 numpy
 scipy
 matplotlib
-pandas
 opencv-contrib-python
+pandas
 motmetrics
 setuptools
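
The README snippet added above leaves the frame-reading step and the detector as placeholders. Below is a minimal, hypothetical sketch of one way to fill them in: it assumes frames come from OpenCV's `cv2.VideoCapture`, that `CentroidTracker()` can be constructed with its default arguments, and it swaps the real detector for a dummy stand-in; only the `tracker.track(...)` call and the 10-field track tuples come from the snippet itself.

```
# Hedged sketch of the tracker loop from the README snippet above.
# Assumptions (not confirmed by this patch): CentroidTracker() accepts default
# arguments, detections are numpy arrays of (bb_left, bb_top, bb_width, bb_height)
# boxes with per-box confidences and class ids, and frames come from cv2.VideoCapture.
import cv2
import numpy as np
from motrackers import CentroidTracker


def dummy_detector(image):
    """Stand-in detector: returns one fixed box covering the centre of the frame."""
    h, w = image.shape[:2]
    bboxes = np.array([[w // 4, h // 4, w // 2, h // 2]])  # shape (n, 4): left, top, width, height
    confidences = np.array([0.9])                          # shape (n,)
    class_ids = np.array([0])                              # shape (n,)
    return bboxes, confidences, class_ids


cap = cv2.VideoCapture("video.mp4")  # hypothetical input file
tracker = CentroidTracker()

while True:
    ok, image = cap.read()
    if not ok:  # stream exhausted or file missing
        break

    bboxes, confidences, class_ids = dummy_detector(image)
    output_tracks = tracker.track(bboxes, confidences, class_ids)

    for track in output_tracks:
        # (frame, id, bb_left, bb_top, bb_width, bb_height, confidence, x, y, z)
        print(track)

cap.release()
```

Replacing `dummy_detector` with one of the repository's `dnn`-based detectors, or any callable returning boxes, confidences, and class ids in the same shapes, leaves the loop unchanged.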