
Crop plant automatic detecting based on in-field images by lightweight DFU-Net model

Document Type: Foreign journal article

Authors: Shi, Hui 1; Shi, Dongyuan 2; Wang, Shengjie 1; Li, Wei 6; Wen, Haojun 1; Deng, Hongtao 1

Author Affiliations: 1.Shihezi Univ, Coll Mech & Elect Engn, Shihezi 832003, Peoples R China

2.Shihezi Univ, Agr Coll, Shihezi 832003, Peoples R China

3.Beijing Acad Agr & Forestry Sci, Minist Agr & Rural Affairs, China Meteorol Adm, Res Ctr Informat Technol, Meteorol Serv Ctr Urban A, Beijing, Peoples R China

4.Minist Agr & Rural Affairs, Key Lab Northwest Agr Equipment, Beijing, Peoples R China

5.Minist Coconstruct Cotton Modernizat Prod Technol, Collaborat Innovat Ctr Prov, Shihezi, Peoples R China

6.Shihezi Univ, Coll Sch Informat Sci & Technol, Shihezi 832003, Peoples R China

Keywords: Crop plant detection; Semantic segmentation; Lightweight U-Net; Deep learning; DFU-Net

Journal: COMPUTERS AND ELECTRONICS IN AGRICULTURE (Impact Factor: 7.7; 5-Year Impact Factor: 8.4)

ISSN: 0168-1699

Year/Volume: 2024, Vol. 217

Pages:

Indexed in: SCI

Abstract: Efficiently and accurately extracting crop plants from authentic and complex field images is essential to ensure the economic efficiency of agricultural production and enhance farming operations. However, many existing deep Convolutional Neural Networks (CNNs) prioritize accuracy over efficiency in crop detection and often lack real-time capability. Therefore, this study investigates the automated detection of various crops, such as cotton plants and seed melons, in natural environments using a U-Net built with a Double-Depth Convolutional and Fusion block (DFU-Net), and explores compound loss functions for improved accuracy. The DFU-Net model combines the strengths of lightweight CNNs and the U-Net model, constructing a Lightweight Crop backbone with Double-Depth Convolution (LC-DDC) by introducing a Double-Depth Convolutional block (DDC block) and incorporating initial and fusion blocks at the first and last positions of the encoder, respectively. The effectiveness of these designs is confirmed through ablation experiments and interpretability analyses. The experimental results indicate that the DFU-Net model achieved Pixel Accuracy (PA), mean Intersection over Union (mIoU), and F1 metrics exceeding 92.0 % on the cotton plant, seed melon, and Computer Vision Problems in Plant Phenotyping (CVPPP) datasets. In particular, on the CVPPP dataset, DFU-Net outperformed the Pyramid Scene Parsing Network (PSPNet) model by 16.4 %, 17.2 %, and 13.2 % in PA, mIoU, and F1 score, respectively. In addition, DFU-Net uses model space efficiently, requiring only 0.975 MB of parameters and 1.68 G floating-point operations (FLOPs). Its performance on low-performance computers is impressive, achieving a detection speed of 10.5 Frames Per Second (FPS), 6.9 times faster than the DeepLabv3+ model, which demonstrates a balanced compromise between detection speed and accuracy. This study provides novel approaches for optimizing crop-detection algorithms and offers valuable technical support for intelligent crop management.
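The abstract describes DFU-Net as a U-Net-style encoder-decoder built from a Double-Depth Convolutional (DDC) block plus initial and fusion blocks, but this record does not give the layer configuration. The PyTorch sketch below is therefore purely illustrative: the class names, channel widths, and the use of depthwise-separable convolutions are assumptions intended only to show how such a block could slot into a lightweight U-Net, not the authors' implementation.

```python
# Illustrative sketch only; all layer choices and widths are assumptions,
# not the published DFU-Net architecture.
import torch
import torch.nn as nn


class DDCBlock(nn.Module):
    """Hypothetical 'double-depth convolution' block: two stacked
    depthwise-separable convolutions with BatchNorm and ReLU."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()

        def dsc(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cin, 3, padding=1, groups=cin, bias=False),  # depthwise
                nn.Conv2d(cin, cout, 1, bias=False),                        # pointwise
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
            )

        self.block = nn.Sequential(dsc(in_ch, out_ch), dsc(out_ch, out_ch))

    def forward(self, x):
        return self.block(x)


class DFUNetSketch(nn.Module):
    """Minimal U-Net-style encoder-decoder with an initial block at the top
    of the encoder and a fusion block at the bottleneck, mirroring the
    abstract's description; channel counts are placeholders."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.initial = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True))
        self.enc1, self.enc2 = DDCBlock(16, 32), DDCBlock(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.fusion = DDCBlock(64, 128)                      # bottleneck "fusion" stage
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = DDCBlock(128, 64)                        # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = DDCBlock(64, 32)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        x0 = self.initial(x)
        e1 = self.enc1(x0)
        e2 = self.enc2(self.pool(e1))
        b = self.fusion(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                                 # per-pixel class logits
```

Depthwise-separable convolutions are a common way to reach sub-megabyte parameter counts of the kind the abstract reports, which is why they are used as the placeholder here.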

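The abstract also cites compound loss functions and the PA, mIoU, and F1 metrics without giving formulas. The sketch below pairs a standard cross-entropy plus soft-Dice compound loss (an assumption, not necessarily the authors' combination) with the conventional definitions of the three metrics.

```python
# Illustrative only: the loss combination and weighting are assumptions;
# the metric definitions are the standard ones for semantic segmentation.
import torch
import torch.nn.functional as F


def compound_loss(logits, target, dice_weight=0.5, eps=1e-6):
    """Cross-entropy plus soft Dice loss, a typical compound segmentation loss."""
    ce = F.cross_entropy(logits, target)
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3))
    union = probs.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
    dice = 1.0 - ((2 * inter + eps) / (union + eps)).mean()
    return ce + dice_weight * dice


def segmentation_metrics(pred, target, num_classes, eps=1e-6):
    """Pixel Accuracy, mean IoU, and mean F1 from per-pixel class predictions."""
    pa = (pred == target).float().mean().item()
    ious, f1s = [], []
    for c in range(num_classes):
        tp = ((pred == c) & (target == c)).sum().float()
        fp = ((pred == c) & (target != c)).sum().float()
        fn = ((pred != c) & (target == c)).sum().float()
        ious.append((tp / (tp + fp + fn + eps)).item())
        f1s.append((2 * tp / (2 * tp + fp + fn + eps)).item())
    return pa, sum(ious) / num_classes, sum(f1s) / num_classes
```

In practice the metrics would be computed from `logits.argmax(dim=1)` and accumulated over a validation set before averaging.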