
PromptExplainer: Explaining Language Models through Prompt-based Learning

Document Type: Conference Paper

First Author: Zijian Feng

Authors: Zijian Feng 1; Hanzhang Zhou 1; Zixiao Zhu 1; Kezhi Mao 2

Author Affiliations: 1. Institute of Catastrophe Risk Management, Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore; Future Resilient Systems Programme, Singapore-ETH Centre, CREATE campus, Singapore

2. School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore; Future Resilient Systems Programme, Singapore-ETH Centre, CREATE campus, Singapore

Conference Name: Conference of the European Chapter of the Association for Computational Linguistics

Organizer:

Pages: 882-895

Abstract: Pretrained language models have become workhorses for various natural language processing (NLP) tasks, sparking a growing demand for enhanced interpretability and transparency. However, prevailing explanation methods, such as attention-based and gradient-based strategies, largely rely on linear approximations, potentially causing inaccuracies such as accentuating irrelevant input tokens. To mitigate the issue, we develop PromptExplainer, a novel method for explaining language models through prompt-based learning. PromptExplainer aligns the explanation process with the masked language modeling (MLM) task of pretrained language models and leverages the prompt-based learning framework for explanation generation. It disentangles token representations into the explainable embedding space using the MLM head and extracts discriminative features with a verbalizer to generate class-dependent explanations. Extensive experiments demonstrate that PromptExplainer significantly outperforms state-of-the-art explanation methods.
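The abstract describes the method only at a high level. The following is a minimal illustrative sketch (not the authors' implementation) of the general idea it outlines: reformulate the input as a cloze prompt, project every token representation into vocabulary space through the MLM head, and read the logits of verbalizer label words as class-dependent token relevance scores. The backbone (bert-base-uncased), the prompt template, and the verbalizer mapping below are assumptions chosen for illustration.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "bert-base-uncased"  # assumed backbone; the paper may use a different model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME).eval()

# Hypothetical verbalizer: class name -> label word in the model's vocabulary.
verbalizer = {"positive": "great", "negative": "terrible"}

text = "The movie was surprisingly good."
# Prompt-based reformulation: append a cloze template containing a mask slot.
prompt = f"{text} It was {tokenizer.mask_token}."

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    # The MLM head maps each token representation into vocabulary space,
    # i.e. the "explainable embedding space" referred to in the abstract.
    logits = model(**inputs).logits[0]  # shape: (seq_len, vocab_size)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for cls_name, label_word in verbalizer.items():
    word_id = tokenizer.convert_tokens_to_ids(label_word)
    # The label word's logit at each position is used here as a
    # class-dependent relevance score for that input token.
    scores = logits[:, word_id]
    top = sorted(zip(tokens, scores.tolist()), key=lambda kv: -kv[1])[:5]
    print(f"{cls_name}: {top}")

Running this prints, for each class, the five input tokens whose MLM-head projections most strongly support the corresponding verbalizer word; it approximates the class-dependent explanations described above rather than reproducing the paper's exact procedure.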

Classification Number: TP311
