Sensor Independent Cloud and Shadow Masking with Partial Labels and Multimodal Inputs - Paper here
Installation
Before installing SEnSeIv2, make sure you have a working installation of PyTorch and torchvision (with CUDA drivers, if you want to use a GPU). More information on installing PyTorch is available here. Note that PyTorch is not included in the requirements.txt file.
To install SEnSeIv2 with functionality as both a Python package and a command line tool:
git clone git@github.com:aliFrancis/SEnSeIv2.git
cd ./SEnSeIv2
python setup.py install
Installation does not download the model weights from the SEnSeIv2 HuggingFace model repo; instead, the weights should be downloaded automatically the first time they are used.
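If you would rather fetch the weights ahead of time (for example, on a machine that will later be offline), you can call the same helper used later in this README. This is a minimal sketch, assuming get_model_files caches the downloaded files locally, as implied by the automatic download on first use:

from senseiv2.utils import get_model_files

# Downloads (and caches) the config and weights for the named model from the
# SEnSeIv2 HuggingFace repo, if they are not already present locally.
config, weights = get_model_files('SEnSeIv2-SegFormerB2-alldata-ambiguous')
print(config, weights)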
Basic Usage
Command line interface
As an example, you can produce a cloud mask (with the classes clear, thin, thick and cloud shadow) for a Sentinel-2 scene.
To see all options for the command line tool, which let you control parameters such as the class structure of the mask or its output resolution, use:
senseiv2 --help
In Python
You can use the cloud masks within Python if you are doing your own data preprocessing, or want to customise things in other ways. See this notebook for a more complete overview of how to use the cloud mask in Python. A typical use-case might begin with:
from senseiv2.inference import CloudMask
from senseiv2.utils import get_model_files

scene = ...       # Some numpy array representing a satellite image
descriptors = [
    {...},        # See senseiv2/constants.py for examples
    {...}
]

# Pick pre-trained model from https://huggingface.co/aliFrancis/SEnSeIv2
model_name = 'SEnSeIv2-SegFormerB2-alldata-ambiguous'
config, weights = get_model_files(model_name)

# Lots of options in the kwargs for different settings
cm = CloudMask(config, weights, verbose=True)

mask = cm(scene, descriptors=descriptors)
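For a self-contained illustration, here is a minimal runnable sketch using random data in place of a real scene. The descriptor dictionaries below are hypothetical placeholders (the key names and the channels-first array layout are assumptions made for illustration); the real descriptor schema and example descriptors are in senseiv2/constants.py:

import numpy as np

from senseiv2.inference import CloudMask
from senseiv2.utils import get_model_files

# Hypothetical descriptors -- the exact schema (key names, units) is defined
# in senseiv2/constants.py; these entries are illustrative placeholders only.
descriptors = [
    {'band_type': 'TOA Reflectance', 'min_wavelength': 645.5, 'max_wavelength': 683.5},  # red
    {'band_type': 'TOA Reflectance', 'min_wavelength': 537.5, 'max_wavelength': 582.5},  # green
    {'band_type': 'TOA Reflectance', 'min_wavelength': 446.0, 'max_wavelength': 542.0},  # blue
]

# Random data standing in for a real satellite image, assuming a
# channels-first (bands, height, width) layout -- check the notebook
# linked above for the expected array format.
scene = np.random.rand(len(descriptors), 512, 512).astype(np.float32)

config, weights = get_model_files('SEnSeIv2-SegFormerB2-alldata-ambiguous')
cm = CloudMask(config, weights, verbose=True)
mask = cm(scene, descriptors=descriptors)  # per-pixel cloud mask for the scene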
Advanced uses (model training etc.)
It is not straightforward to replicate the training strategy used here precisely, because not all of the datasets used can be redistributed. However, the train.py script is included, along with some sample data in samples/ to get you started. All the data used has been preprocessed into a shared format using the eo4ai tool.
After collecting some of the datasets, you can run the training script with a config file from the HuggingFace repo (with some modifications):
python train.py path/to/config.yaml
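One way to make those modifications is to load a config from the HuggingFace repo, edit it programmatically, and save a local copy. This is a purely illustrative sketch: the key name used below is hypothetical, so inspect the actual config file for the real structure before editing:

import yaml

# Load a config downloaded from https://huggingface.co/aliFrancis/SEnSeIv2.
with open('path/to/config.yaml') as f:
    config = yaml.safe_load(f)

# Point the training data at a local directory. NOTE: 'data' and 'train_dirs'
# are hypothetical keys -- check the real config for the actual schema.
config['data']['train_dirs'] = ['samples/']

# Save the modified copy and pass it to train.py.
with open('my_config.yaml', 'w') as f:
    yaml.safe_dump(config, f)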