P-Distill: Efficient and Effective Prompt Tuning Using Knowledge Distillation
In the field of natural language processing (NLP), prompt-based learning is widely used for parameter-efficient learning. However, this method has the drawback of shortening the usable input length by the length of the attached prompt, leading to inefficient utilization of the input space. In this study, we propose P-Distill, a novel prompt tuning method based on knowledge distillation.