Animl comprises a variety of machine learning tools for analyzing ecological data. The package includes a set of functions to classify subjects within camera trap field data and can handle both images and videos.
Below are the steps required for automatic identification of animals within camera trap images or videos.
1. File Manifest
First, build the file manifest of a given directory.
```r
library(animl)

imagedir <- "examples/TestData"

# create save-file placeholders and working directories
WorkingDirectory(imagedir, globalenv())

# Read exif data for all images within base directory
files <- build_file_manifest(imagedir, out_file = filemanifest, exif = TRUE)

# Process videos, extract frames for ID
allframes <- extract_frames(files, out_dir = vidfdir, out_file = imageframes,
                            frames = 2, parallel = TRUE, workers = parallel::detectCores())
```
2. Object Detection
This produces a dataframe of images, including frames extracted from any videos, to be fed into the classifier. The authors recommend a two-step approach: first use Microsoft's MegaDetector object detector to identify potential animals, then apply a second classification model trained on the species of interest.
```r
# Load the MegaDetector model
md_py <- megadetector("/mnt/machinelearning/megaDetector/md_v5a.0.0.pt")

# Obtain crop information for each image
mdraw <- detect_MD_batch(md_py, allframes)

# Add crop information to dataframe
mdresults <- parse_MD(mdraw, manifest = allframes, out_file = detections)
```
3. Classification
Then feed the crops into the classifier. We recommend only classifying crops identified by MD as animals.
```r
# Pull out animal crops
animals <- get_animals(mdresults)

# Set of crops with human, vehicle and empty MD predictions
empty <- get_empty(mdresults)

model_file <- "/Models/Southwest/v3/southwest_v3.pt"
class_list <- "/Models/Southwest/v3/southwest_v3_classes.csv"

# load the model
southwest <- load_model(model_file, class_list)

# obtain species predictions
animals <- predict_species(animals, southwest[[1]], southwest[[2]], raw = FALSE)

# recombine animal detections with remaining detections
manifest <- rbind(animals, empty)
```
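The split-and-recombine pattern used here can be illustrated with a toy dataframe. Note that the column names below are hypothetical, for illustration only; the actual columns are produced by `parse_MD`.

```r
# Toy detection results: one row per detection (column names hypothetical)
mdresults <- data.frame(
  file     = c("a.jpg", "b.jpg", "c.jpg", "d.jpg"),
  category = c("animal", "empty", "animal", "human"),
  stringsAsFactors = FALSE
)

# Only "animal" rows are sent to the species classifier
animals <- mdresults[mdresults$category == "animal", ]
others  <- mdresults[mdresults$category != "animal", ]

# ...classify `animals`... then recombine so no detection is lost
manifest <- rbind(animals, others)
```

Because the non-animal rows are set aside rather than dropped, the recombined manifest contains every original detection.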
If your data includes videos or sequences, we recommend using the `sequenceClassification` algorithm, which requires the raw output of the prediction algorithm.
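The idea behind sequence classification can be sketched in base R: pool the frame-level class scores across a sequence before picking a single label. This is a conceptual sketch with made-up numbers, not the package's actual implementation.

```r
# Toy raw softmax scores for 3 frames of one video (rows) over 3 classes (cols)
raw <- matrix(c(0.6, 0.3, 0.1,
                0.3, 0.6, 0.1,
                0.5, 0.4, 0.1), nrow = 3, byrow = TRUE)
classes <- c("deer", "coyote", "empty")

# Average scores across the sequence, then take the top class
pooled <- colMeans(raw)
best <- classes[which.max(pooled)]
```

Here frame 2 alone would be labeled "coyote", but pooling across the sequence yields "deer", which is why sequence-level classification tends to be more stable than per-frame labels.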
We recommend running animl on a computer with a dedicated GPU.
Animl also depends on exiftool for accessing file metadata.
Python
animl depends on Python and, if installed via CRAN, will install the required Python package dependencies when they are not already available.
However, we recommend setting up a conda environment using the provided config file.
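A typical conda setup might look like the following; the environment file name and environment name below are assumptions, so check the repository for the actual config file.

```shell
# Create the environment from the repo's config file (file name assumed)
conda env create -f environment.yml

# Activate it before launching R so the Python dependencies are found
conda activate animl
```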
The R version of animl depends on the Python version to handle the machine learning:
animl-py
Next, install animl-py in your preferred Python environment (such as conda) using pip:

```shell
pip install animl
```
Animl-r can be installed through CRAN:

```r
install.packages('animl')
```
Animl-r can also be installed by downloading this repo, opening the animl.Rproj file in RStudio and selecting Build -> Install Package.
Contributors
Kyra Swanson
Mathias Tobler
Edgar Navarro
Josh Kessler
Jon Kohler