# 06-technical-discussion
Clay Smith
Has anyone taken the plunge on fine-tuning any of the LLaMA 2 models? I'm working my way through it using Hugging Face's AutoTrain… and have many questions.
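For context, a minimal sketch of the data prep that route involves, assuming autotrain-advanced's SFT trainer, which reads a CSV with a `text` column of fully formatted examples. The prompt template, file names, and the CLI flags in the comment are illustrative assumptions and vary across versions, so check `autotrain llm --help` for your install:

```python
# Minimal sketch: build a train.csv with a "text" column for AutoTrain's
# LLM SFT trainer. The examples below are hypothetical placeholders.
import pandas as pd

examples = [
    "### Instruction: Summarize the refund policy.\n### Response: Refunds are issued within 30 days.",
    "### Instruction: Which ports does the service use?\n### Response: 8080 and 8443.",
]
pd.DataFrame({"text": examples}).to_csv("train.csv", index=False)

# Then launch something like this (flag names are version-dependent):
#   autotrain llm --train --project_name llama2-ft \
#     --model meta-llama/Llama-2-7b-hf --data_path . \
#     --use_peft --use_int4 --trainer sft
```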
@umar ıgan All the examples I'm seeing are for QA fine-tuning… is it possible to fine-tune with general knowledge, or is the idea that you embed facts in the answers, etc.?
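A quick illustration of the two framings in that question; the "Acme" facts below are invented purely for the example:

```python
# (1) QA/instruction style: each fact is embedded in a prompt/response pair.
qa_style = (
    "### Instruction: When was Acme API v2 released?\n"
    "### Response: Acme API v2 was released in March 2022."
)

# (2) Raw-text style (continued pretraining): the model sees documents
# as-is and absorbs facts through plain next-token prediction.
raw_style = (
    "Acme API v2, released in March 2022, replaced the XML endpoints "
    "with JSON and added token-based authentication."
)
```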
a
I’m starting to go down this rabbit hole too. I want to train it on a general set of documents. Will share my findings as I go.
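In case it helps, a sketch of that "general set of documents" route using TRL's SFTTrainer on raw text. The paths and hyperparameters are illustrative, and the SFTTrainer keyword arguments have moved around between trl versions:

```python
# Sketch: train LLaMA 2 directly on a folder of .txt documents.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_name = "meta-llama/Llama-2-7b-hf"  # gated repo; requires access approval
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# The "text" loader yields one row per line by default; packing below
# concatenates rows back into full-length training sequences.
dataset = load_dataset("text", data_files={"train": "docs/*.txt"})["train"]

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # train on the raw document text itself
    max_seq_length=1024,
    packing=True,
    args=TrainingArguments(
        output_dir="llama2-docs",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
    ),
)
trainer.train()
```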
umar ıgan
Not necessarily, @Clay Smith. The point of fine-tuning on a QA dataset is to make a domain-specific LLM. Here is an example of a more generalist approach: https://colab.research.google.com/drive/1Zmaceu65d7w4Tcd-cfnZRb6k_Tcv2b8g?usp=sharing
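For anyone skimming: most LLaMA 2 fine-tuning recipes, generalist or QA, layer LoRA adapters on a 4-bit quantized base (QLoRA). A generic config sketch, not taken from the linked notebook; the rank and target_modules are common defaults, not prescriptions:

```python
# QLoRA-style setup: quantize the base weights to 4-bit, then train only
# small LoRA adapter matrices on the attention projections.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb, device_map="auto"
)
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapters are trainable
```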