dc.description |
Cotton harvesting is performed by expensive combine harvesters whose cost is a barrier for small- to medium-size cotton farmers.
Advances in robotics provide an opportunity to harvest cotton using small and robust autonomous rovers that can
be deployed in the field as an “army” of harvesters. This paradigm shift in cotton harvesting requires high-accuracy
3D measurement of cotton boll positions under field conditions. This in-field, high-throughput phenotyping of
cotton boll position includes real-time image acquisition, depth processing, color segmentation, feature extraction
and determination of cotton boll position. In this study, a 3D camera system was mounted on a research rover at
82° below the horizontal and captured 720p images at 15 frames per second while the rover moved
over two rows of potted, defoliated cotton plants. The software development kit provided by the camera manufacturer
was installed and used to process the stereo stream and produce a disparity map of the cotton bolls. The system also
ran the Robot Operating System (ROS), which streamed live image frames to client computers wirelessly and in real time.
Cotton boll distances from the ground were determined using a 4-step machine vision algorithm (depth processing,
color segmentation, feature extraction, and frame matching for position determination). The 3D camera
reported the distance of each boll from its left lens, and algorithms were developed to compute vertical distance
from the ground and horizontal distance from the rover. Comparing cotton boll distances above the ground with
manual measurements, the system achieved an average R² of 0.99 with 9 mm RMSE when stationary and
0.95 with 34 mm RMSE when moving at approximately 0.64 km/h. This level of accuracy is favourable for
proceeding to the next step of simultaneous localization and mapping of cotton bolls and robotic harvesting. |
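The height-recovery geometry the abstract describes (camera tilted 82° below the horizontal, depth measured from the left lens, vertical distance derived from the ground) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the camera mounting height, the white-pixel color threshold, and the synthetic frame are all assumptions introduced here for demonstration.

```python
import numpy as np

# Assumed parameters for illustration only; the abstract does not give them.
CAM_HEIGHT_MM = 1200.0        # camera lens height above the ground (assumed)
TILT_RAD = np.deg2rad(82.0)   # camera tilted 82 deg below horizontal (from abstract)

def segment_bolls(rgb):
    """Color-segmentation step: mask near-white pixels as cotton bolls.
    The >200 threshold is a crude stand-in for the paper's actual method."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (r > 200) & (g > 200) & (b > 200)

def boll_height_mm(depth_mm):
    """Simplified position step: for a pixel near the optical axis,
    height above ground ~= mount height - depth * sin(tilt angle)."""
    return CAM_HEIGHT_MM - depth_mm * np.sin(TILT_RAD)

# Tiny synthetic frame: a 2x2 white "boll" patch on a green canopy background.
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[..., 1] = 120                  # green canopy
rgb[1:3, 1:3] = 255                # white boll pixels
mask = segment_bolls(rgb)

depth = np.full((4, 4), 900.0)     # synthetic depth: 900 mm to every pixel
heights = boll_height_mm(depth[mask])
print(round(float(heights.mean()), 1))   # mean boll height above ground, mm
```

In a real pipeline the depth map would come from the camera SDK's disparity output and the frame-matching step would associate the same boll across successive frames; both are omitted here for brevity.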
|