RAG vs. Fine-Tuning

RAG

  • Portable across LLMs (see the sketch after this list)
  • Easy to remove source data from the retrieval index if any of it becomes obsolete
  • Much cheaper than fine-tuning -- no training runs are required
  • More future-proof -- if a better LLM comes out, you can just swap it in
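
A minimal sketch of the pattern, assuming a hypothetical `retrieve` helper over a domain document store and a caller-supplied `call_llm` function (neither name comes from a real library). Because the model only ever sees the assembled prompt, swapping LLMs means swapping `call_llm`:

```python
from typing import Callable, List

def retrieve(question: str, top_k: int = 3) -> List[str]:
    """Hypothetical lookup against a domain-specific document store.
    In practice: embed the question and run a vector similarity search."""
    return ["<relevant chunk 1>", "<relevant chunk 2>", "<relevant chunk 3>"][:top_k]

def answer(question: str, call_llm: Callable[[str], str]) -> str:
    # Retrieved context is assembled into the prompt at query time;
    # no model weights are changed.
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

# Swapping to a better LLM is a one-argument change:
# answer("What is our SLA?", call_llm=new_model_client)
```

Removing obsolete knowledge is equally simple: delete the documents from the store that `retrieve` searches, with no retraining involved.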


Fine-Tuning

  • Good if you need to minimize tokens in the prompt (illustrated below)
  • Slow to get started -- data preparation and training take time
  • Generally expensive to train and run
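
To make the token trade-off concrete, here is a rough, illustrative comparison; whitespace splitting stands in crudely for a real tokenizer, and the strings are made up:

```python
question = "What is the refund window for enterprise customers?"
retrieved_context = (
    "Policy excerpt: enterprise customers may request a refund "
    "within 30 days of purchase, subject to a usage review."
)

# A fine-tuned model carries the domain knowledge in its weights,
# so the prompt can be just the question.
fine_tuned_prompt = question

# A RAG prompt must carry the retrieved context on every request.
rag_prompt = f"Context: {retrieved_context}\n\nQuestion: {question}"

print(len(fine_tuned_prompt.split()), "words (fine-tuned prompt)")
print(len(rag_prompt.split()), "words (RAG prompt)")
```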

Advantages of RAG over Pre-trained or Fine-tuned LLMs

RAG has distinct advantages over pre-trained or fine-tuned LLMs.

Pre-training involves training an LLM from scratch using a large dataset. While this allows for extensive customization, it requires significant resources and time investment.

Fine-tuning adapts pre-trained models to new tasks or domains with specialized datasets. Although more resource-efficient than pre-training, fine-tuning still demands considerable GPU resources and can be challenging: it may inadvertently cause the LLM to forget previously learned information (catastrophic forgetting) or degrade its general proficiency.

RAG, on the other hand, augments the general knowledge an LLM acquired from publicly available data with domain-specific data from the enterprise, supplied as context at prompt time rather than baked into the weights. Additionally, a post-processing step in a RAG system can verify generated responses against the retrieved sources, minimizing the risk of inaccuracies or fabricated information from the LLM.
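
One way such a verification step can look, as a sketch: a crude word-overlap check that flags answers poorly grounded in the retrieved context. Production systems use stronger techniques (entailment models, citation checking); `grounding_score` and the threshold here are assumptions of this sketch, not a standard API.

```python
def grounding_score(answer: str, context: str) -> float:
    """Fraction of the answer's words that also appear in the context."""
    answer_words = set(answer.lower().split())
    context_words = set(context.lower().split())
    if not answer_words:
        return 0.0
    return len(answer_words & context_words) / len(answer_words)

def verify(answer: str, context: str, threshold: float = 0.5) -> bool:
    # Reject responses that appear ungrounded in the retrieved context.
    return grounding_score(answer, context) >= threshold
```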

RAG has emerged as a common pattern for GenAI, extending the power of LLMs to domain-specific datasets without the need for retraining models.