Fine-Tuning or Retrieval – Microsoft (arxiv.org)
2 points by shenli3514 on Feb 7, 2024 | hide | past | favorite | 1 comment


Our findings reveal that while unsupervised fine-tuning offers some improvement, RAG consistently outperforms it, both for existing knowledge encountered during training and entirely new knowledge. Moreover, we find that LLMs struggle to learn new factual information through unsupervised fine-tuning, and that exposing them to numerous variations of the same fact during training could alleviate this problem.
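The RAG approach the abstract favors can be sketched in a few lines: rather than baking facts into model weights via fine-tuning, relevant passages are retrieved at query time and prepended to the prompt. This is a minimal illustrative sketch, not the paper's actual setup; the corpus, overlap-based scorer, and prompt format are all assumptions standing in for a real retriever and LLM.

```python
def tokenize(text):
    # Crude word-level tokenization; a real system would use embeddings.
    return set(text.lower().split())

def retrieve(query, corpus, k=1):
    # Rank passages by word overlap with the query (a stand-in retriever).
    scored = sorted(corpus,
                    key=lambda p: len(tokenize(query) & tokenize(p)),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query, corpus):
    # Prepend retrieved context so the LLM can answer from it directly,
    # instead of relying on knowledge injected through fine-tuning.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "The Eiffel Tower was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
]
print(build_rag_prompt("When was the Eiffel Tower completed?", corpus))
```

The prompt that results carries the retrieved fact verbatim, which is why RAG handles entirely new knowledge that fine-tuning struggles to inject.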



