Generating Diverse Agricultural Data for Vision-Based Farming Applications

Mikolaj Cieslak (1,3), Umabharathi Govindarajan (2), Alejandro Garcia (1), Anuradha Chandrashekar (2), Torsten Hädrich (1), Aleksander Mendoza-Drosik (1), Dominik L. Michels (1,4), Sören Pirk (1,5), Chia-Chun Fu (2), Wojciech Pałubicki (1,6)

1 GreenMatterAI GmbH
2 Blue River Technology
3 University of Calgary, Canada
4 King Abdullah University of Science and Technology, Saudi Arabia, and Technical University of Darmstadt, Germany
5 Christian-Albrecht University of Kiel, Germany
6 Adam Mickiewicz University in Poznań, Poland

Abstract

We present a specialized procedural model for generating synthetic agricultural scenes, focusing on soybean crops together with various weeds. The model simulates distinct growth stages of these plants, diverse soil conditions, and randomized field arrangements under varying lighting conditions. Integrating real-world textures and environmental factors into the procedural generation process enhances the photorealism and applicability of the synthetic data. We validate the model's effectiveness by comparing the synthetic data against real agricultural images, demonstrating its potential to significantly augment training data for machine learning models in agriculture. This approach not only provides a cost-effective solution for generating high-quality, diverse data, but also addresses specific needs of agricultural vision tasks that are not fully covered by general-purpose models.

Reference

Mikolaj Cieslak, Umabharathi Govindarajan, Alejandro Garcia, Anuradha Chandrashekar, Torsten Hädrich, Aleksander Mendoza-Drosik, Dominik L. Michels, Sören Pirk, Chia-Chun Fu, and Wojciech Pałubicki. Generating Diverse Agricultural Data for Vision-Based Farming Applications. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshop: Vision for Agriculture, 2024.

Download the paper here (2.2 MB)