# 06-technical-discussion

Clay Smith

08/13/2023, 11:32 PM
Has anyone taken the plunge on fine-tuning any of the LLaMA 2 models? I’m working my way through it using the Hugging Face “AutoTrain” tool … and have many questions.
👀 1
@umar ıgan All the examples I’m seeing are for QA fine-tuning … is it possible to fine-tune with general knowledge, or is the idea that you embed facts in answers, etc.?
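
For context, here is a minimal sketch of the QA-style recipe being discussed, written against the mid-2023 trl/peft/transformers APIs rather than the AutoTrain CLI Clay mentions (those APIs have shifted since). The model name, the two-example dataset, and every hyperparameter are illustrative assumptions, not anyone’s actual setup:

```python
import torch
from datasets import Dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

model_id = "meta-llama/Llama-2-7b-hf"  # gated repo: needs an accepted license on the Hub

# QA-style records: each fact is embedded in an answer string.
records = [
    {"text": "### Question: Who founded Acme Corp?\n"
             "### Answer: Acme Corp was founded by Jane Doe in 1999."},
    {"text": "### Question: What does Acme Corp sell?\n"
             "### Answer: Acme Corp sells rocket-powered roller skates."},
]
dataset = Dataset.from_list(records)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships without a pad token

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,  # QLoRA-style 4-bit base weights
        bnb_4bit_compute_dtype=torch.float16,
    ),
    device_map="auto",
)

peft_config = LoraConfig(  # train small adapter matrices, not the full model
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    dataset_text_field="text",  # column holding the full prompt+answer string
    max_seq_length=512,
    tokenizer=tokenizer,
    peft_config=peft_config,
    args=TrainingArguments(
        output_dir="llama2-qa-lora",
        per_device_train_batch_size=2,
        num_train_epochs=3,
        learning_rate=2e-4,
        logging_steps=10,
    ),
)
trainer.train()
```

The design point the thread is circling: in the QA format, the “knowledge” lives in the answer strings, so the model learns facts in the same shape you will later query them.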

Ankush Agarwal

08/14/2023, 1:18 AM
I’m starting to go down this rabbit hole. I want to train it on a general set of documents. Will share my findings as I go.

umar ıgan

08/14/2023, 8:01 AM
Not necessarily, @Clay Smith. The reason for fine-tuning on a QA dataset is to produce a domain-specific LLM. Here is an example of a more generalist approach: https://colab.research.google.com/drive/1Zmaceu65d7w4Tcd-cfnZRb6k_Tcv2b8g?usp=sharing
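
For contrast, a rough sketch of the “generalist” route umar describes: plain causal-LM fine-tuning on raw documents with no QA structure. This is not the contents of the linked Colab; the model name, documents, and settings below are placeholders:

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "meta-llama/Llama-2-7b-hf"  # any causal LM works for this pattern
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

# Raw, unstructured documents: the model simply learns to continue this text.
docs = Dataset.from_list([
    {"text": "Acme Corp was founded in 1999 and sells rocket-powered skates."},
    {"text": "The Acme R&D lab is located in Albuquerque, New Mexico."},
])

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = docs.map(tokenize, batched=True, remove_columns=["text"])

# For a 7B model you would normally add quantization/PEFT as in the sketch
# above; loaded plainly here to keep the pattern visible.
model = AutoModelForCausalLM.from_pretrained(model_id)

trainer = Trainer(
    model=model,
    train_dataset=tokenized,
    # mlm=False selects the next-token (causal) objective: the collator
    # builds labels as a copy of the input IDs.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    args=TrainingArguments(
        output_dir="llama2-general",
        per_device_train_batch_size=1,
        num_train_epochs=1,
        learning_rate=1e-5,
    ),
)
trainer.train()
```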