The paper "DoctorGLM: Fine-tuning your Chinese Doctor is not a Herculean Task" explores the development of a healthcare-focused large language model (LLM) for Chinese medical consultation, built on the bilingual (Chinese-English) ChatGLM-6B base model and trained with cost-effective fine-tuning methods. By combining this base model with efficient techniques such as LoRA and INT4 quantization, the authors demonstrate that medical dialogue models can be trained affordably and deployed on limited hardware. The project aims to make specialized medical language models more accessible and invites collaboration to improve their practical capabilities and reliability.

Shirel Goldenberg & Kfir Hemo
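To illustrate the kind of parameter-efficient fine-tuning the paper relies on, here is a minimal sketch of applying LoRA to ChatGLM-6B with the Hugging Face PEFT library. This is not the authors' actual training script; the dataset, rank, and other hyperparameters are illustrative assumptions.

```python
# Minimal LoRA fine-tuning sketch for ChatGLM-6B (illustrative, not the
# paper's actual configuration or hyperparameters).
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

model_name = "THUDM/chatglm-6b"  # bilingual base model used by DoctorGLM
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_name,
    trust_remote_code=True,
    torch_dtype=torch.float16,   # half precision to fit on a single GPU
)

# LoRA freezes the base weights and trains only small low-rank adapter
# matrices, which is what keeps the fine-tuning cost low.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                 # adapter rank (assumed value)
    lora_alpha=32,       # scaling factor (assumed value)
    lora_dropout=0.1,
    target_modules=["query_key_value"],  # attention projection in ChatGLM
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train
```

For deployment on memory-constrained GPUs, the paper additionally uses INT4 quantization; ChatGLM's released implementation exposes this via a `model.quantize(4)` call, though the exact deployment setup here is an assumption rather than a reproduction of the authors' pipeline.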