
Commit 57d544e

Update and Code cleaning
1 parent f463abe commit 57d544e

20 files changed (+803, -2153 lines)

.gitignore

Lines changed: 144 additions & 0 deletions
@@ -0,0 +1,144 @@
+.ipynb_checkpoints/
+video_data/
+!video_data/readme.md
+.idea/
+examples/output.avi
+
+pretrained_models/caffemodel_weights/
+!pretrained_models/caffemodel_weights/get_caffemodel.sh
+
+pretrained_models/tensorflow_weights/
+!pretrained_models/tensorflow_weights/get_ssd_model.sh
+
+pretrained_models/yolo_weights/
+!pretrained_models/yolo_weights/get_yolo.sh
+
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+pip-wheel-metadata/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+.python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
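The file above is mostly a standard Python `.gitignore`, plus repo-specific entries that ignore the downloaded weight directories while listing the download scripts with a `!` negation. A minimal sketch of checking such patterns with `git check-ignore` in a throwaway repository (assumes `git` is on `PATH`; the filename `yolov3.weights` is only an illustration, not a file in this commit):

```python
# Confirm that a downloaded weights file is matched by the ignore rules.
# `git check-ignore` exits 0 and echoes the path when the file is ignored.
import pathlib
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    repo = pathlib.Path(tmp)
    subprocess.run(["git", "init", "-q", str(repo)], check=True)
    (repo / ".gitignore").write_text(
        "pretrained_models/yolo_weights/\n"
        "!pretrained_models/yolo_weights/get_yolo.sh\n"
    )
    weights_dir = repo / "pretrained_models" / "yolo_weights"
    weights_dir.mkdir(parents=True)
    (weights_dir / "yolov3.weights").touch()  # hypothetical downloaded file
    result = subprocess.run(
        ["git", "-C", str(repo), "check-ignore",
         "pretrained_models/yolo_weights/yolov3.weights"],
        capture_output=True, text=True,
    )

weights_ignored = (result.returncode == 0)
```

Note that git cannot re-include a file whose parent directory is excluded, so the `!` entries chiefly document intent; the `get_*.sh` scripts stay visible because they are already tracked, and tracked files are unaffected by `.gitignore`.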

CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-Aditya - @adipandas
+Aditya M. Deshpande - @adipandas

README.md

Lines changed: 61 additions & 30 deletions
@@ -1,62 +1,94 @@
-[output_video_1]: ./assets/sample-output.gif "Sample Output with YOLO"
-[output_video_2]: ./assets/sample-output-2.gif "Sample Output with SSD"
+[cars-yolo-output]: ./assets/cars.gif "Sample Output with YOLO"
+[cows-tf-ssd-output]: ./assets/cows.gif "Sample Output with SSD"
 
-# Multi-Object-Tracker
+# multi-object-tracker
 Object detection using deep learning and multi-object tracking
 
 [![DOI](https://zenodo.org/badge/148338463.svg)](https://zenodo.org/badge/latestdoi/148338463)
 
 
 #### YOLO
-![Output Sample with YOLO][output_video_1]
+Video Source: [link](https://flic.kr/p/89KYXt)
 
-#### SSD
-![Output Sample with SSD][output_video_2]
+![Cars with YOLO][cars-yolo-output]
 
+#### Tensorflow-SSD-MobileNet
+Video Source: [link](https://flic.kr/p/26WeEWy)
 
-## Install OpenCV
+![Cows with tf-SSD][cows-tf-ssd-output]
+
+
+### Installation
 Pip install for OpenCV (version 3.4.3 or later) is available [here](https://pypi.org/project/opencv-python/) and can be done with the following command:
 
-`pip install opencv-contrib-python`
+```
+pip install numpy matplotlib scipy
+pip install opencv-contrib-python
+```
 
-## Run with YOLO
+Installation of `ipyfilechooser` is recommended if you want to use the jupyter notebooks available in the ```examples``` folder.
+```
+pip install ipyfilechooser
+```
 
-1. Open the terminal
-2. Go to `yolo_dir` in this repository: `cd ./yolo_dir`
-3. Run: `sudo chmod +x ./get_yolo.sh`
-4. Run: `./get_yolo.sh`
+```
+git clone https://github.com/adipandas/multi-object-tracker
+cd multi-object-tracker
+pip install -e .
+```
+
+### YOLO
+
+Do the following in the terminal:
+```
+cd ./pretrained_models/yolo_weights
+sudo chmod +x ./get_yolo.sh
+./get_yolo.sh
+```
 
-The model and the config files will be downloaded in `./yolo_dir`. These will be used `tracking-yolo-model.ipynb`.
+The above commands will download the model and the config files in `./pretrained_models/yolo_weights`.
+These weights are to be used in `examples/tracking-yolo-model.ipynb`.
 
 - The video input can be specified in the cell named `Initiate opencv video capture object` in the notebook.
 - To make the source as the webcam, use `video_src=0` else provide the path of the video file (example: `video_src="/path/of/videofile.mp4"`).
 
-Example video used in above demo: https://flic.kr/p/L6qyxj
+Example video used in above demo was taken from [here](https://flic.kr/p/L6qyxj)
 
-## Run with TensorFlow SSD model
+### TensorFlow model
 
-1. Open the terminal
-2. Go to the tensorflow_model_dir: `cd ./tensorflow_model_dir`
-3. Run: `sudo chmod +x ./get_ssd_model.sh`
-4. Run: `./get_ssd_model.sh`
+Do the following in the terminal:
+```
+cd ./pretrained_models/tensorflow_weights
+sudo chmod +x ./get_ssd_model.sh
+./get_ssd_model.sh
+```
 
-This will download model and config files in `./tensorflow_model_dir`. These will be used `tracking-tensorflow-ssd_mobilenet_v2_coco_2018_03_29.ipynb`.
+This will download model and config files in `./pretrained_models/tensorflow_weights`.
+These will be used `examples/tracking-tensorflow-ssd_mobilenet_v2_coco_2018_03_29.ipynb`.
 
 **SSD-Mobilenet_v2_coco_2018_03_29** was used for this example.
 Other networks can be downloaded and ran: Go through `tracking-tensorflow-ssd_mobilenet_v2_coco_2018_03_29.ipynb` for more details.
 
 - The video input can be specified in the cell named `Initiate opencv video capture object` in the notebook.
 - To make the source as the webcam, use `video_src=0` else provide the path of the video file (example: `video_src="/path/of/videofile.mp4"`).
 
-Video used in SSD-Mobilenet multi-object detection and tracking: https://flic.kr/p/26WeEWy
+Video used in SSD-Mobilenet multi-object detection and tracking can be found [here](https://flic.kr/p/89KYXt)
 
-## Run with Caffemodel
-- You have to use `tracking-caffe-model.ipynb`.
-- The model for use is provided in the folder named `caffemodel_dir`.
-- The video input can be specified in the cell named `Initiate opencv video capture object` in the notebook.
-- To make the source as the webcam, use `video_src=0` else provide the path of the video file (example: `video_src="/path/of/videofile.mp4"`).
+### Caffemodel
 
-## References
+Do the following in the terminal
+```
+cd ./pretrained_models/caffemodel_weights
+sudo chmod +x ./get_caffemodel.sh
+./get_caffemodel.sh
+```
+
+This will download model and config files in `./pretrained_models/caffemodel_weights`.
+These will be used `examples/tracking-caffe-model-mobilenetSSD.ipynb`.
+
+The caffemodel example provided here also uses MobileNet-SSD model for detection.
+
+### References and Credits
 This work is based on the following literature:
 1. Bochinski, E., Eiselein, V., & Sikora, T. (2017, August). High-speed tracking-by-detection without using image information. In 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (pp. 1-6). IEEE. [[paper-pdf](http://elvera.nue.tu-berlin.de/files/1517Bochinski2017.pdf)]
 2. Pyimagesearch [link-1](https://www.pyimagesearch.com/2018/07/23/simple-object-tracking-with-opencv/), [link-2](https://www.pyimagesearch.com/2018/11/12/yolo-object-detection-with-opencv/)
@@ -69,8 +101,7 @@ Use the caffemodel zoo from the reference [4,5] mentioned above to vary the CNN
 
 ***Suggestion**: If you are looking for speed go for SSD-mobilenet. If you are looking for accurracy and speed go with YOLO. The best way is to train and fine tune your models on your dataset. Although, Faster-RCNN gives more accurate object detections, you will have to compromise on the detection speed as it is slower as compared to YOLO.*
 
-
-## Citation
+### Citation
 
 If you use this repository in your work, please consider citing it with:
 ```

assets/cars.gif

7.92 MB

assets/cows.gif

6.74 MB

assets/sample-output-2.gif

-36.9 MB

assets/sample-output.gif

-23.6 MB
