Collaborative Optimization of Model Pruning and Knowledge Distillation for Efficient and Lightweight Multi-Behavior Recognition in Piglets

Document type: Foreign journal article

First author: Luo, Yizhi

Authors: Luo, Yizhi; Lin, Kai; Chen, Yuankai; Xiao, Deqin; Yang, Chen; Xiao, Zixuan

Author affiliations:

Keywords: piglet; multi-behavior recognition; prune; distill; precision livestock farming

Journal: ANIMALS (impact factor: 2.7; five-year impact factor: 3.2)

ISSN: 2076-2615

Year/Volume/Issue: 2025, Vol. 15, Issue 11

Pages:

Indexed in: SCI

Abstract: In modern intensive pig farming, accurately monitoring piglet behavior is crucial for health management and improving production efficiency. However, the complexity of existing models demands high computational resources, limiting the application of piglet behavior recognition in farming environments. In this study, the piglet multi-behavior-recognition approach is divided into three stages. In the first stage, the LAMP pruning algorithm is used to prune and optimize redundant channels, resulting in the lightweight YOLOv8-Prune. In the second stage, based on YOLOv8, the AIFI module and the Gather-Distribute mechanism are incorporated, resulting in YOLOv8-GDA. In the third stage, using YOLOv8-GDA as the teacher model and YOLOv8-Prune as the student model, knowledge distillation is employed to further enhance detection accuracy, yielding the YOLOv8-Piglet model, which strikes a balance between detection accuracy and speed. Compared to the baseline model, YOLOv8-Piglet significantly reduces model complexity while improving detection performance, with a 6.3% increase in precision, an 11.2% increase in recall, and an mAP@0.5 of 91.8%. The model was deployed on the NVIDIA Jetson Orin NX edge computing platform for evaluation. The average inference time was reduced from 353.9 ms to 163.2 ms, a 53.8% reduction in processing time. This study achieves a balance between model compression and recognition accuracy through the collaborative optimization of pruning and knowledge distillation.
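The abstract pairs LAMP pruning with teacher-student knowledge distillation. As a generic illustration only (not the paper's implementation, and independent of YOLOv8), the sketch below shows the two core computations: the layer-adaptive LAMP importance score, which ranks each weight by its squared magnitude relative to all weights at least as large, and a Hinton-style temperature-softened distillation loss. All function names here are hypothetical.

```python
import math

def softmax(logits, temperature=1.0):
    # Soften logits by temperature before normalizing to probabilities.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradients keep a consistent magnitude.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

def lamp_scores(weights):
    # LAMP score of weight u: w_u^2 / sum of w_v^2 over all v with
    # w_v^2 >= w_u^2 (suffix sum over the ascending sorted order).
    order = sorted(range(len(weights)), key=lambda i: weights[i] ** 2)
    sq = [weights[i] ** 2 for i in order]
    scores = [0.0] * len(weights)
    suffix = 0.0
    for rank in range(len(sq) - 1, -1, -1):
        suffix += sq[rank]
        scores[order[rank]] = sq[rank] / suffix
    return scores

# Identical student and teacher logits give zero distillation loss.
print(round(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]), 6))  # 0.0
# The globally largest-magnitude weight always scores 1.0 under LAMP.
print(max(lamp_scores([0.1, -2.0, 0.5])))  # 1.0
```

Pruning would then remove the channels or weights with the lowest LAMP scores, and the distillation loss would be mixed with the ordinary task loss when training the pruned student.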

Classification number:
