Client: Environment Canterbury
Environment Canterbury (ECan) has been gathering photographic data of ground cover across grazed and destocked hill and high-country farmland in Canterbury continuously since the late 1970s. This is part of a long-term programme called Stereo Photo Transects (SPT), designed to monitor the effects of soil and water conservation plans over time.
One of the study objectives was to determine whether destocking resulted in a decrease in bare ground. To assess this management difference, ECan also established sites on grazed land similar to the destocked land. Each site consists of transects running across the landscape, with ten photo points captured by a tripod-mounted camera pointed vertically downwards, giving a field of view covering a few square metres of ground.
Lynker Analytics was commissioned to develop a convolutional neural network (CNN) semantic segmentation model to automate the classification of each photo into its fractional ground cover, enabling scientific interpretation.
From around 10,000 images, approximately 1,000 were partially labelled by human annotators into four ground cover classes: Living, Dead, Rock and Soil. Each of these high-resolution images carried one thousand random point assessments (RPAs), creating a training data resource of 1,000,000 labels covering a wide range of cameras, years, image resolutions and lighting conditions.
This diversity helps ensure robust learning and a generalised model that should perform well under different conditions. To further aid generalisation, we applied random transformations to the training imagery (a sketch of such a pipeline follows the list), including:
up-down and left-right reflections
brightness reduction or increase
contrast reduction or increase
hue reduction or increase (effectively shifting the colour)
saturation reduction or increase (changing the colourfulness)
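As a minimal sketch, the transformations above could be implemented with torchvision; the library and parameter ranges below are illustrative assumptions, not details from the project. Note that geometric flips must be applied to the image and its sparse label mask together, while the photometric jitter applies to the image only.

```python
import torch
import torchvision.transforms.functional as TF
from torchvision import transforms

# Photometric jitter (image only); the ranges are illustrative assumptions.
colour_jitter = transforms.ColorJitter(
    brightness=0.2,   # brightness reduction or increase
    contrast=0.2,     # contrast reduction or increase
    saturation=0.2,   # saturation reduction or increase (colourfulness)
    hue=0.05,         # hue shift (effectively shifting the colour)
)

def augment(image: torch.Tensor, mask: torch.Tensor):
    """Apply the random transformations listed above.
    image: (3, H, W) float tensor in [0, 1]; mask: (H, W) point-label tensor.
    Flips are applied to image and mask together so labels stay aligned."""
    if torch.rand(1) < 0.5:                  # left-right reflection
        image, mask = TF.hflip(image), TF.hflip(mask)
    if torch.rand(1) < 0.5:                  # up-down reflection
        image, mask = TF.vflip(image), TF.vflip(mask)
    image = colour_jitter(image)             # photometric transforms
    return image, mask
```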
We used the DeepLabV3+ architecture, a well-known and efficient CNN for semantic segmentation that has demonstrated high accuracy on land cover tasks.
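As an illustration of how such a weakly supervised setup can be assembled (the backbone, library and ignore-index convention below are assumptions, not details from the project), a DeepLabV3+ model can be paired with a cross-entropy loss that skips unlabelled pixels, so training is driven only by the sparse RPA points:

```python
import torch
import torch.nn as nn
import segmentation_models_pytorch as smp

NUM_CLASSES = 4     # Living, Dead, Rock, Soil
IGNORE = 255        # sentinel value for unlabelled pixels in the point masks

# DeepLabV3+ with an ImageNet-pretrained encoder (backbone is an assumption).
model = smp.DeepLabV3Plus(
    encoder_name="resnet50",
    encoder_weights="imagenet",
    in_channels=3,
    classes=NUM_CLASSES,
)

# With sparse point labels, every pixel except the annotated random points
# is set to IGNORE, so the loss is computed only where a label exists.
criterion = nn.CrossEntropyLoss(ignore_index=IGNORE)

def training_step(images: torch.Tensor, point_masks: torch.Tensor) -> torch.Tensor:
    """images: (B, 3, H, W) floats; point_masks: (B, H, W) long tensor filled
    with IGNORE except at the labelled random point assessment locations."""
    logits = model(images)          # (B, NUM_CLASSES, H, W)
    return criterion(logits, point_masks)
```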
Example images, training points, and inference are shown below. Red = Dead, Green = Living, White = Rock, Blue = Soil.
The models developed as part of this research enabled automated inference across the whole photo archive and consistent classification across time.
Model inference yields fractional cover by site and camera position for each of the four classes. A benefit of this is the ability to track ground cover over time at a particular site. Taking one photo site as an example, plotting the fraction of ground cover per class over time shows the trends and allows comparison of the RPA estimates with the machine learning predictions for two of the four classes. We see good correspondence between the ground cover inferred by the machine learning process and the ground cover estimated from the human-labelled RPAs.
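Computing these fractions from a predicted class mask is straightforward; the class-to-ID mapping below is an assumption for illustration, as the project's actual encoding is not specified here.

```python
import numpy as np

# Assumed class-ID mapping; the project's actual encoding may differ.
CLASSES = {0: "Living", 1: "Dead", 2: "Rock", 3: "Soil"}

def cover_fractions(pred_mask: np.ndarray) -> dict[str, float]:
    """Fractional ground cover from an (H, W) array of predicted class IDs."""
    total = pred_mask.size
    return {name: float((pred_mask == cid).sum() / total)
            for cid, name in CLASSES.items()}
```

Aggregating these per-photo fractions by site and year gives the time series used for trend analysis.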
The model has shown good results (equivalent to or better than the human annotations) on the RPA-classified subset of images from the transect photo archive and demonstrated the usefulness of sparse point-based training. Applying these models increased the number of classified photos from 1,000 to the full photo archive, giving the SPT project a more complete analysis of ground cover trends across all sites over time and a consistent, automated ground cover detection process for future years. The model and code were provided to ECan, enabling them to run the models on photos in future years either with Lynker's help or independently.
Deliverables:
Ground cover fractions and statistics in CSV format
Rescaled photos
ML output images with class label pixel values
ML output images with classes depicted by visible colour
This work was also published at the 38th International Conference on Image and Vision Computing New Zealand (IVCNZ); the citation follows.
D. Knox, B. Xue, M. Zhang and J. Cuff, "Measuring Ground Cover in Long Term Hill Country Photography using Weakly Supervised Convolutional Neural Networks," 2023 38th International Conference on Image and Vision Computing New Zealand (IVCNZ), Palmerston North, New Zealand, 2023, pp. 1-6, doi: 10.1109/IVCNZ61134.2023.10343908.