
Improving large language models for miRNA information extraction via prompt engineering

Document type: Foreign-language journal article

Authors: Wu, Rongrong 1; Zong, Hui 1; Wu, Erman 1; Li, Jiakun 1; Zhou, Yi 1; Zhang, Chi 1; Zhang, Yingbo 1; Wang, Jiao 1; Tang, Tong 1; Shen, Bairong 1

Author affiliations: 1.Sichuan Univ, West China Hosp, Dept Urol, Chengdu, Peoples R China

2.Sichuan Univ, West China Hosp, Inst Syst Genet, Frontiers Sci Ctr Dis Related Mol Network, Chengdu, Peoples R China

3.Soochow Univ, Affiliated Hosp 1, Operat Management Dept, Suzhou, Peoples R China

4.Xinjiang Med Univ, Affiliated Hosp 1, Dept Neurosurg, Urumqi, Peoples R China

5.Sichuan Univ, West China Hosp, Dept Crit Care Med, Joint Lab Artificial Intelligence Crit Care Med, Chengdu, Peoples R China

6.Chinese Acad Trop Agr Sci, Trop Crops Genet Resources Inst, Haikou, Peoples R China

7.Sichuan Univ, West China Tianfu Hosp, Chengdu, Sichuan, Peoples R China

Keywords: MicroRNA; Cancer; Large language models; Information extraction; Datasets; Prompt engineering

Journal: COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE (Impact Factor: 4.8; 5-Year Impact Factor: 5.4)

ISSN: 0169-2607

Year/Volume: 2025, Vol. 271

Pages:

Indexed in: SCI

Abstract:

Objective: Large language models (LLMs) demonstrate significant potential in biomedical knowledge discovery, yet their performance in extracting fine-grained biological information, such as miRNA, remains insufficiently explored. Accurate extraction of miRNA-related information is essential for understanding disease mechanisms and identifying biomarkers. This study aims to comprehensively evaluate the capabilities of LLMs in miRNA information extraction through diverse prompt learning strategies.

Methods: Three high-quality miRNA information extraction datasets were constructed to support the benchmarking and training of generative LLMs, namely Re-Tex, Re-miR, and miR-Cancer. These datasets encompass three types of entities: miRNAs, genes, and diseases, along with their relationships. The accuracy and reliability of three LLMs (GPT-4o, Gemini, and Claude) were evaluated and compared with traditional models. Different prompt engineering strategies were implemented to enhance the LLMs' performance, including baseline prompts, 5-shot Chain of Thought prompts, and generated knowledge prompts.

Results: The combination of optimized prompt strategies significantly improved overall entity extraction performance across both trained and untrained datasets. Generated knowledge prompting achieved the highest performance, with maximum F1 scores of 76.6% for entity extraction and 54.8% for relationship extraction. Comparative analysis indicated that GPT-4o outperformed Gemini, while Claude showed the lowest performance. Extraction accuracy varied considerably across entity types, with miRNA recognition achieving the highest performance and gene/protein identification the lowest. Furthermore, binary relationship extraction accuracy was significantly lower than entity extraction performance. The three evaluated LLMs showed similarly limited capability in relationship extraction tasks, with no statistically significant differences observed between models. Finally, comparison with conventional computational methods revealed that LLMs have not yet surpassed traditional methods in this specialized domain.

Conclusion: This study established high-quality miRNA datasets to support information extraction and knowledge discovery. The overall performance of LLMs in this study was limited, and challenges remain in miRNA-related information extraction. However, optimized prompt combinations can substantially improve performance. Future work should focus on further refinement of LLMs to accelerate the discovery and application of potential diagnostic and therapeutic targets.
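The abstract compares baseline prompts, 5-shot Chain of Thought prompts, and generated knowledge prompts. As a rough illustration only, the sketch below contrasts a baseline prompt with a generated-knowledge prompt for miRNA entity extraction. The prompt wording, the example sentence, and the helper functions are assumptions for illustration; they are not the authors' actual prompts or code.

```python
# Minimal sketch of two prompt styles mentioned in the abstract:
# a zero-shot baseline prompt and a "generated knowledge" prompt that
# prepends background facts before the extraction instruction.
# All wording here is illustrative, not taken from the paper.

def baseline_prompt(sentence: str) -> str:
    """Baseline: ask directly for miRNA, gene/protein, and disease entities."""
    return (
        "Extract all miRNA, gene/protein, and disease entities from the "
        "following sentence. Return a JSON object with the keys "
        "'miRNA', 'gene', and 'disease'.\n\n"
        f"Sentence: {sentence}"
    )

def generated_knowledge_prompt(sentence: str, knowledge: str) -> str:
    """Generated-knowledge prompting: supply background facts first, then
    ask for entities and miRNA-gene / miRNA-disease relationships."""
    return (
        "Background knowledge:\n"
        f"{knowledge}\n\n"
        "Using the background knowledge above, extract all miRNA, "
        "gene/protein, and disease entities from the sentence, and list any "
        "miRNA-gene or miRNA-disease relationships it states.\n\n"
        f"Sentence: {sentence}"
    )

if __name__ == "__main__":
    # Hypothetical input sentence and background facts for demonstration.
    sentence = (
        "Overexpression of miR-21 downregulates PTEN and promotes "
        "proliferation in hepatocellular carcinoma."
    )
    knowledge = (
        "miRNAs are small non-coding RNAs whose names start with 'miR-'. "
        "They regulate genes post-transcriptionally and are frequently "
        "dysregulated in cancer."
    )
    print(baseline_prompt(sentence))
    print()
    print(generated_knowledge_prompt(sentence, knowledge))
```

In practice the returned prompt string would be sent to an LLM (e.g. GPT-4o, Gemini, or Claude, as evaluated in the paper) and the model's structured output compared against the annotated datasets; that evaluation loop is not shown here.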
