VS Code (needed to run the Jupyter notebooks that are stored as Python files in the VS Code cell format)
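For context, the "VS Code format" mentioned above refers to plain `.py` files that use `# %%` cell markers, which VS Code renders as notebook cells. A minimal illustration (the variable names below are only examples, not taken from this repository):

```python
# %% [markdown]
# Each "# %%" line starts a new cell when the file is opened in VS Code.

# %%
import numpy as np

grid_size = 7  # example value only; the real hyperparameters live in yolo/const.py

# %%
print(np.zeros((grid_size, grid_size)).shape)
```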
File structure:
yolo: Contains the core library of the algorithm.
__init__.py
const.py: Stores the hyperparameters.
loss.py: Collection of loss functions.
model.py: The YOLO model and the base convolutional model.
utils.py: Utility methods.
main_card_ds.py: Jupyter notebook (VS Code format) that trains the YOLO v1 model on the solitaire card detection dataset.
main_coco_ds.py: Jupyter notebook (VS Code format) that trains the YOLO v1 model on the COCO object detection dataset.
main_coco_ds.ipynb: Jupyter notebook that trains the YOLO v1 model on the COCO object detection dataset on Google Colab (training elsewhere will not work as expected).
weights: Contains the saved model weights (a hedged loading sketch follows this list).
checkpoint9: The customized YOLO v3 model, trained for 2 epochs of 20,000 steps each with batch size 4.
checkpoint10: The customized YOLO v3 model, trained for 4 epochs of 20,000 steps each with batch size 4.
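As a rough guide to using these checkpoints, the sketch below restores one and runs a single forward pass. It assumes a TensorFlow/Keras implementation with `tf.train`-style checkpoints; the class name `YoloModel`, its no-argument constructor, and the exact layout under `weights/checkpoint10` are assumptions for illustration, not the repository's confirmed API.

```python
# Minimal sketch, not the repository's exact API: `YoloModel`, its constructor,
# and the checkpoint layout are assumptions made for illustration.
import tensorflow as tf
from yolo.model import YoloModel  # hypothetical class name inside yolo/model.py

model = YoloModel()                       # build the detector
ckpt = tf.train.Checkpoint(model=model)   # wrap it so TF can restore its variables
ckpt.restore(tf.train.latest_checkpoint("weights/checkpoint10")).expect_partial()

# Run one 448x448 RGB image through the network (the YOLO v1 input size).
image = tf.zeros([1, 448, 448, 3])        # placeholder batch; substitute a real image
predictions = model(image, training=False)
print(predictions.shape)                  # per-cell box and class predictions
```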
Here are the intermediate outputs of the convolutional layers:
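For readers who want to reproduce such visualizations, here is one way the intermediate activations can be pulled out of a Keras-style model. It continues from the loading sketch above (reusing `model` and `image`) and assumes the backbone is a functional Keras model built from `Conv2D` layers; that assumption may not match the repository's actual model definition.

```python
# Continues from the loading sketch above; assumes `model` is a functional
# Keras model, so `model.input` and per-layer `.output` tensors exist.
import tensorflow as tf

conv_outputs = [layer.output for layer in model.layers
                if isinstance(layer, tf.keras.layers.Conv2D)]
activation_model = tf.keras.Model(inputs=model.input, outputs=conv_outputs)

feature_maps = activation_model(image, training=False)
for i, fmap in enumerate(feature_maps):
    print(f"conv layer {i}: {fmap.shape}")  # spatial size shrinks, channel count grows with depth
```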
Reference:
@INPROCEEDINGS{7780460,
  author={Redmon, Joseph and Divvala, Santosh and Girshick, Ross and Farhadi, Ali},
  booktitle={2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  title={You Only Look Once: Unified, Real-Time Object Detection},
  year={2016},
  pages={779-788},
  doi={10.1109/CVPR.2016.91}
}