08/13/2023, 11:32 PM
Has anyone taken the plunge on fine-tuning any of the LLaMA 2 models? Working my way through that using the
… and have many questions.
should have searched first!
All the examples I’m seeing are for QA fine-tuning… is it possible to fine-tune on general knowledge, or is the idea that you embed facts in answers, etc.?
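One common way to do the "embed facts in answers" approach is to rewrite each fact as a QA pair and wrap it in LLaMA 2's chat prompt template. A minimal sketch, assuming hypothetical example data (the `[INST]` template format comes from Meta's LLaMA 2 chat models; everything else here is illustrative):

```python
# Sketch: packaging general-knowledge facts as QA pairs in the
# LLaMA 2 chat prompt template. The facts below are hypothetical
# placeholders, not real training data.

def to_llama2_prompt(question: str, answer: str) -> str:
    """Wrap one QA pair in LLaMA 2's [INST] chat format."""
    return f"<s>[INST] {question} [/INST] {answer} </s>"

# Each fact you want the model to learn becomes one (question, answer) pair.
facts = [
    ("What year was the company founded?", "It was founded in 1998."),
    ("Where is the head office?", "The head office is in Austin, Texas."),
]

train_texts = [to_llama2_prompt(q, a) for q, a in facts]
print(train_texts[0])
# → <s>[INST] What year was the company founded? [/INST] It was founded in 1998. </s>
```

These formatted strings are then what you'd tokenize and feed to your fine-tuning script.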
08/14/2023, 1:18 AM
I’m starting to go down this rabbit hole. I want to train it on a general set of documents. Will share my findings as I go.
08/14/2023, 8:01 AM
The reason for fine-tuning on a QA dataset is to make a domain-specific LLM. Here is an example of a more generalist approach:
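The generalist alternative is usually continued pretraining: instead of QA pairs, you concatenate raw documents and split them into fixed-length blocks for causal-LM training. A minimal sketch of that packing step, using a toy whitespace tokenizer as a stand-in for the real LLaMA 2 tokenizer (the docs and `block_size` here are illustrative assumptions):

```python
# Sketch of causal-LM "packing": concatenate tokenized documents and
# cut them into equal-length blocks, dropping the ragged tail.
# str.split stands in for a real tokenizer (e.g. LLaMA 2's
# SentencePiece tokenizer), and block_size=8 is just for demo.

def chunk_documents(docs, tokenize, block_size=8):
    """Concatenate tokenized docs and split into fixed-size blocks."""
    tokens = [t for d in docs for t in tokenize(d)]
    usable = (len(tokens) // block_size) * block_size  # drop the remainder
    return [tokens[i:i + block_size] for i in range(0, usable, block_size)]

docs = [
    "the quick brown fox jumps over the lazy dog",
    "pack tokens into fixed length blocks for training",
]
blocks = chunk_documents(docs, str.split)
print(len(blocks), [len(b) for b in blocks])
# → 2 [8, 8]
```

Each block then trains the model to predict the next token over plain text, with no QA structure required.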