Using Python and Scikit Learn to Count Crops in Drone Orthophotos

Geospatial crop counting is a valuable technique that allows farmers to monitor the health and growth of their crops without labor-intensive manual counting. By using drone orthophotos and Python's scientific libraries, including OpenCV, scikit-image, and Scikit Learn, you can automate the process of counting crops in a specific region with high accuracy and efficiency. In this tutorial, we will walk you through the steps of geospatial crop counting from drone orthophotos using Python.

Step 1: Collecting Drone Orthophotos

The first step in geospatial crop counting is to collect drone orthophotos of the area you want to analyze. Drone orthophotos are high-resolution aerial images that have been geo-referenced and corrected for distortion, making them ideal for accurately identifying and counting crops. Make sure to collect drone orthophotos at regular intervals throughout the growing season to track the progress of the crops.

Step 2: Preprocessing Drone Orthophotos

Before we can start counting crops, we need to preprocess the drone orthophotos to extract important features and reduce noise. This can be done using Python libraries such as OpenCV and NumPy. First, import the necessary libraries:

import cv2
import numpy as np

Next, load the drone orthophoto using the cv2.imread() function and convert it to grayscale:

image = cv2.imread('drone_orthophoto.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

Now, we can apply image processing techniques such as thresholding, noise removal, and edge detection to enhance the quality of the image and make it easier to identify crops. This step may require some experimentation to find the optimal parameters for your specific dataset.
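As a starting point, here is a minimal sketch, assuming plants appear brighter than the soil background in the grayscale image: Otsu thresholding followed by a morphological opening. The blur and kernel sizes and the iteration count are illustrative and will need tuning for your imagery. The resulting binary mask is stored in thresholded_image, which we reuse in the next step.

# Otsu thresholding: plants are assumed to be brighter than the background
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
_, thresholded_image = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological opening removes small specks of noise (kernel size is illustrative)
kernel = np.ones((3, 3), np.uint8)
thresholded_image = cv2.morphologyEx(thresholded_image, cv2.MORPH_OPEN, kernel, iterations=2)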

Step 3: Feature Extraction and Segmentation

Once the drone orthophoto has been preprocessed, we can extract features and segment the image to isolate the crops from the background. This can be done with clustering algorithms such as K-means or with image segmentation techniques like watershed segmentation. In this tutorial, we will use the scikit-image (skimage) library for a marker-based watershed, which can separate plants whose canopies touch. The min_distance value below roughly corresponds to the minimum plant spacing in pixels and should be tuned to your imagery:

from skimage.segmentation import watershed
from skimage.feature import peak_local_max
from scipy import ndimage

# Distance transform of the binary mask: peaks lie near the centres of individual plants
binary = thresholded_image > 0
distance = ndimage.distance_transform_edt(binary)

# Local maxima of the distance map become one marker per plant, and watershed
# on the inverted distance map separates plants whose canopies touch
coords = peak_local_max(distance, min_distance=20, labels=ndimage.label(binary)[0])
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = watershed(-distance, markers, mask=binary)

After segmenting the image, each labeled region corresponds to one detected plant. The crop count is therefore the number of unique labels, minus one to exclude the background (label 0):

num_crops = len(np.unique(labels)) - 1

Step 4: Validation and Post-Processing

Finally, we need to validate the results of the crop counting algorithm and post-process the data to ensure accuracy. One common technique is to compare the automated crop counts with ground truth data obtained through manual counting or other methods. This can help identify any discrepancies and refine the algorithm to improve accuracy.
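As a simple illustration, assuming you have a manual count for the same field (the ground_truth_count value below is a hypothetical placeholder, not real data), the relative error of the automated count can be computed directly:

# Hypothetical ground-truth count from a manual field survey (placeholder value)
ground_truth_count = 500
error_pct = abs(num_crops - ground_truth_count) / ground_truth_count * 100
print(f'Automated count: {num_crops}, manual count: {ground_truth_count}, error: {error_pct:.1f}%')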

In addition, it may be necessary to perform additional post-processing steps such as noise removal, filtering out false positives, and refining the crop boundaries. This can be done using morphological operations, contour detection, and other image processing techniques.
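For example, here is a minimal sketch of one such filter, assuming the labels array from Step 3: it drops segments whose area is implausible for a single plant and recomputes num_crops from the remaining regions. The min_area and max_area thresholds are illustrative and depend on your image resolution and plant size.

from skimage.measure import regionprops

# Keep only segments whose pixel area is plausible for a single plant
# (thresholds are illustrative and depend on ground sampling distance)
min_area, max_area = 50, 5000
valid_regions = [r for r in regionprops(labels) if min_area <= r.area <= max_area]
num_crops = len(valid_regions)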

Step 5: Visualization and Reporting

Once the crop counting algorithm has been validated and post-processed, we can visualize the results and generate reports to communicate the findings to stakeholders. This can be done using Python libraries such as Matplotlib and Seaborn to create visualizations such as histograms, bar charts, and heatmaps.

import matplotlib.pyplot as plt

# Create a bar chart of crop counts
plt.bar(['Crops'], [num_crops])
plt.xlabel('Type of Crops')
plt.ylabel('Number of Crops')
plt.title('Crop Counting Results')
plt.show()
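Beyond summary charts, it is often more useful to spot-check the segmentation visually. A minimal sketch, assuming the image and labels arrays from the earlier steps, overlays the detected plants on the orthophoto using scikit-image's label2rgb:

from skimage.color import label2rgb

# Overlay the segmented plants on the original orthophoto (converted from BGR to RGB)
overlay = label2rgb(labels, image=cv2.cvtColor(image, cv2.COLOR_BGR2RGB), bg_label=0)
plt.imshow(overlay)
plt.title(f'Detected crops: {num_crops}')
plt.axis('off')
plt.show()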

In conclusion, geospatial crop counting from drone orthophotos with Python and Scikit Learn is a powerful technique that can help farmers monitor the health and growth of their crops with high accuracy and efficiency. By following the steps outlined in this tutorial, you can automate the process of crop counting and make informed decisions to optimize agricultural productivity.

14 Comments
@balamacab
1 month ago

Any tutorial on how to set up the environment?

@dzulfansyah09
1 month ago

Is it possible to use more than one raster dataset at the same time for model training?

@darrelllim2372
1 month ago

I tried it, but I'm getting an error when running #extract point value from raster. Any idea how I can fix it?
---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
Input In [5], in <cell line: 3>()
      2 surveyRowCol = []
      3 for index, values in pointData.iterrows():
----> 4     x = values['geometry'].xy[0][0]
      5     y = values['geometry'].xy[1][0]
      6     row, col = palmRaster.index(x,y)

File ~\anaconda3\envs\geo_env\lib\site-packages\shapely\geometry\base.py:349, in BaseGeometry.xy(self)
    346 @property
    347 def xy(self):
    348     """Separate arrays of X and Y coordinate values"""
--> 349     raise NotImplementedError

NotImplementedError:

@seanreilly6618
1 month ago

Thank you sir.

@gbirijoshua396
1 month ago

Anyone tried this out yet?

@ecotransconstructionheavym8303
1 month ago

Hi sir can I have ur email address please

@islandmonkey87
1 month ago

Thank you!

@ninansajeethphilip4656
1 month ago

Nice presentation

@seraffy7766
1 month ago

Hi, this video is very interesting. Now I'm stuck exporting the csv file to a local directory with np.savetxt(' C:\Users\DEV\Desktop\values.csv', birchPoint, delimiter=",") and get this error:

File "C:\Users\DEV\AppData\Local\Temp/ipykernel_2736/2159543175.py", line 1
np.savetxt(' C:\Users\DEV\Desktop\values.csv', birchPoint, delimiter=",")
^
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 3-4: truncated \UXXXXXXXX escape

Thanks a lot for help.

@DiegoChinchilla
1 month ago

Thanks a lot, I'll try it for coffee plants.

@plarepi8341
1 month ago

Hello, I see you used the same dataset as in the tutorial on making the orthomosaic.
Is there a way for you to share the original drone dataset?
Thanks

@markbrown6623
1 month ago

Do you think that this would work for trees with WorldView 2 data or does it only really work with drone data?

@Arhangheluls
1 month ago

This is a very interesting video tutorial!
Is there a way to first apply an image segmentation, similar to Object-based Image Analysis (OBIA), and then classify each object (polygon) using the methodology you presented in your video? I think this combination of image segmentation and object classification using template matching might improve the extraction for plant recognition or any other land-cover type.
I have no previous experience working with machine learning and deep learning algorithms in Python, so please share some tutorial links if you consider it necessary, so that I can better understand the procedures. Thank you!

@hasansalihkulunk1227
1 month ago

Awesome work!