Added a new v2_transform() method that replaces torch.cat + nn.Conv2d combinations with CatConv2d, an all-in-one fused CUDA kernel that performs the concatenation and convolution in a single pass.
Inference speed on a Titan V improves from 70 fps to 99 fps.
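For reference, here is a minimal sketch of the pattern that v2_transform() targets: a block that concatenates feature maps along the channel dimension and immediately convolves the result. Only this baseline torch.cat + nn.Conv2d pattern is taken from the description above; the exact CatConv2d call signature is not shown here.

import torch
import torch.nn as nn

# Baseline pattern that v2_transform() replaces: concatenate several feature
# maps along the channel dimension, then convolve the concatenated tensor.
# After v2_transform(), this cat + conv pair is handled by the fused
# CatConv2d CUDA kernel instead (its exact signature is not reproduced here).
class ConcatConv(nn.Module):
    def __init__(self, in_channels_list, out_channels, kernel_size=1):
        super().__init__()
        self.conv = nn.Conv2d(sum(in_channels_list), out_channels,
                              kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, feature_maps):  # list of tensors with matching H and W
        return self.conv(torch.cat(feature_maps, dim=1))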
CatConv2d Installation
cd CatConv2d/
python setup.py install
(Please note that the backward pass for CatConv2d has not been implemented, so it can be used for inference only.)
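A minimal inference-only sketch, assuming a segmentation model that exposes the v2_transform() method described above (the helper below and its argument names are illustrative, not part of this repository's API):

import torch

def fused_inference(net, image):
    """Run inference with CatConv2d fusion enabled.

    `net` is assumed to be an FCHarDNet segmentation model exposing the
    v2_transform() method described above; `image` is an NCHW float tensor.
    """
    net = net.cuda().eval()
    net.v2_transform()             # fuse torch.cat + nn.Conv2d pairs into CatConv2d
    with torch.no_grad():          # the backward pass is not implemented
        logits = net(image.cuda())
    return logits.argmax(dim=1)    # per-pixel class predictions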
Fully Convolutional HarDNet for Segmentation in PyTorch
To train the model:
python train.py [-h] [--config [CONFIG]]
--config Configuration file to use (default: hardnet.yml)
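For example, passing the documented default configuration explicitly:
python train.py --config hardnet.yml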
To validate the model:
usage: validate.py [-h] [--config [CONFIG]] [--model_path [MODEL_PATH]] [--save_image]
[--eval_flip] [--measure_time]
--config Config file to be used
--model_path Path to the saved model
--eval_flip Enable evaluation with flipped image | False by default
--measure_time Enable evaluation with time (fps) measurement | True by default
--save_image Enable writing result images to out_rgb (predicted labels blended with the input images) and out_predID
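For example, to evaluate a checkpoint and write the blended result images (the model path below is a placeholder):
python validate.py --config hardnet.yml --model_path /path/to/weights.pth --save_image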
Pretrained Weights
Cityscapes pretrained weights: Download (Val mIoU: 77.7, Test mIoU: 75.9)
Cityscapes pretrained with color jitter augmentation: Download (Val mIoU: 77.4, Test mIoU: 76.0)