Project Proposal: Robust Fine-Tuning Techniques for NLP Applications
This project aims to explore and implement robust fine-tuning techniques for large pre-trained NLP models, focusing on improving domain-specific performance while preserving out-of-domain robustness. By leveraging methods such as weight-space ensembling (WiSE-FT), the project will address challenges in distribution shift and generalization, with applications in tasks like sentiment analysis. The proposed approach aims to balance high accuracy on target datasets with robustness to unseen data, providing practical insights for advanced NLP systems.

Shirel Goldenberg & Kfir Hemo
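The core of the WiSE-FT method mentioned above is a linear interpolation between the pre-trained (zero-shot) checkpoint and the fine-tuned checkpoint in weight space. The sketch below illustrates the idea, assuming checkpoints are represented as flat parameter dictionaries; the scalar "weights" and the key names are illustrative stand-ins for real model tensors.

```python
def wise_ft(zero_shot, fine_tuned, alpha=0.5):
    """Interpolate two state dicts: (1 - alpha) * zero_shot + alpha * fine_tuned.

    alpha = 0 recovers the robust zero-shot model, alpha = 1 the fine-tuned one;
    intermediate values trade target-domain accuracy against OOD robustness.
    """
    assert zero_shot.keys() == fine_tuned.keys()
    return {k: (1 - alpha) * zero_shot[k] + alpha * fine_tuned[k]
            for k in zero_shot}

# Toy example with scalars standing in for parameter tensors.
zs = {"layer.weight": 1.0, "layer.bias": 0.0}
ft = {"layer.weight": 3.0, "layer.bias": 2.0}
mixed = wise_ft(zs, ft, alpha=0.5)
print(mixed)  # {'layer.weight': 2.0, 'layer.bias': 1.0}
```

In practice the same element-wise interpolation would be applied to every tensor in a model's state dict, and alpha would be chosen on a validation set that reflects the deployment distribution.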
This proposal is based on work from 2021, which is less relevant in 2025. Find a more recent, up-to-date reference on LLM fine-tuning, and I will approve the topic.