Efficient fine-tuning of small-parameter large language models for biomedical bilingual multi-task applications
Published in Applied Soft Computing, 2025
Abstract
The escalating computational costs of large language models (LLMs) have catalyzed the pursuit of more efficient alternatives, particularly in specialized domains such as biomedicine. This study introduces a novel approach for fine-tuning small-parameter LLMs to address biomedical bilingual (Chinese-English) multi-task challenges. By employing an efficient fine-tuning strategy, we demonstrate that small-parameter models can achieve performance comparable to that of larger counterparts while significantly reducing resource consumption. The proposed method effectively captures domain-specific knowledge and enhances generalization across diverse biomedical tasks, offering a viable solution for resource-constrained environments.
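The page does not show how the fine-tuning is implemented. As a rough illustration of parameter-efficient fine-tuning of a small-parameter causal LLM, the sketch below uses Hugging Face Transformers with LoRA adapters from the peft library; the base model name and all hyperparameters are illustrative assumptions, not the configuration reported in the paper.

```python
# A minimal sketch of parameter-efficient fine-tuning (LoRA) for a small
# causal LM. Requires: pip install torch transformers peft
# The model name and hyperparameters are placeholders, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen2-1.5B"  # hypothetical small bilingual base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16
)

# LoRA trains small low-rank adapter matrices injected into selected
# weight matrices, leaving the original weights frozen.
lora_config = LoraConfig(
    r=16,                                  # adapter rank
    lora_alpha=32,                         # adapter scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only the low-rank adapter weights receive gradients, optimizer state and gradient memory shrink accordingly, which is what makes fine-tuning of this kind practical in resource-constrained environments.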
Highlights
- Proposed an efficient fine-tuning framework tailored for small-parameter LLMs in the biomedical domain.
- Achieved superior performance in bilingual (Chinese-English) multi-task scenarios, validating the model’s robustness.
- Demonstrated that strategically fine-tuned small models can rival larger LLMs, significantly lowering deployment costs and latency.
Publication Details
- DOI: 10.1016/j.asoc.2025.113084
- Journal Impact Factor (IF): 6.6
- CAS Ranking: Chinese Academy of Sciences journal partition Q2 (Top journal)
- Authors: Yinghong Li#, Yudong Yan#, Zhuohao Tong, Yu Wang, Yinqi Yang, Mingze Bai, Dan Pu, Jiazheng Xie, Chuan Liu, Bo Li, Mingwei Liu, Kunxian Shu* (# equal contribution; * corresponding author)
Recommended citation: Li, Y., Yan, Y., Tong, Z., Wang, Y., Yang, Y., Bai, M., Pu, D., Xie, J., Liu, C., Li, B., Liu, M., & Shu, K. (2025). "Efficient fine-tuning of small-parameter large language models for biomedical bilingual multi-task applications." Applied Soft Computing, 175, 113084.
Download Paper