# 06-technical-discussion

Clay Smith

08/13/2023, 11:32 PM
Has anyone taken the plunge on fine-tuning any of the LLaMA 2 models? Working my way through that using the Hugging Face “AutoTrain” tool … and have many questions.
👀 1
@umar ıgan All the examples I’m seeing are for QA fine-tuning… is it possible to fine-tune on general knowledge, or is the idea that you embed facts in answers, etc.?
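(Editor's note: a minimal sketch, not from the thread, of the distinction Clay is asking about. QA-style fine-tuning wraps each fact inside a prompt/answer pair, while "general knowledge" fine-tuning, often called continued pretraining, feeds raw documents straight to the causal-LM objective. The `qa_sample`/`raw_sample` helpers and the `### Question:`/`### Answer:` template are hypothetical, not a specific library's format.)

```python
# Two ways to present the same fact to a causal LM during fine-tuning.
# Template and helper names are illustrative assumptions, not a real API.

def qa_sample(question: str, answer: str) -> str:
    """Instruction/QA-style sample: the fact is embedded in the answer."""
    return f"### Question:\n{question}\n### Answer:\n{answer}"

def raw_sample(document: str) -> str:
    """Continued-pretraining sample: the raw document itself is the
    training target; no prompt/answer structure is imposed."""
    return document

# Same fact, two formats:
qa = qa_sample("What port does the internal API use?",
               "It listens on port 8443.")
raw = raw_sample("The internal API listens on port 8443 behind the proxy.")

print(qa)
print(raw)
```

Either format ends up as plain token sequences trained with the same next-token objective; the difference is whether the model also learns the question-answer *behavior* (QA format) or only absorbs the text itself (raw format).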

Ankush Agarwal

08/14/2023, 1:18 AM
I’m starting to go down this rabbit hole. I want to train it on a general set of documents. Will share my findings as I go.

umar ıgan

08/14/2023, 8:01 AM
Not necessarily @Clay Smith. The reason for fine-tuning on a QA dataset is to make a domain-specific LLM. Here is an example of a more generalist approach;