# 06-technical-discussion
m
Anybody here fine-tuning LLMs? I’m really curious about data ingestion for fine-tuning small LLMs (phi-2, qlora-mistral). What do you look at when it comes to dataset preparation?
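For context, here is a minimal sketch of the kind of dataset prep step people usually mean for QLoRA-style SFT. It assumes an instruction-style JSONL file with `instruction`/`response` fields and a token-length cutoff; the file name, field names, and limit are illustrative, not something from this thread.

```python
# Sketch: format an instruction dataset and filter by token length
# before handing it to an SFT trainer. Assumes "train.jsonl" with
# "instruction"/"response" fields (hypothetical names).
from datasets import load_dataset
from transformers import AutoTokenizer

MODEL_ID = "mistralai/Mistral-7B-v0.1"  # or "microsoft/phi-2"
MAX_TOKENS = 1024                        # assumed context budget

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

def to_text(example):
    # Flatten each record into a single prompt/response string.
    return {
        "text": f"### Instruction:\n{example['instruction']}\n\n"
                f"### Response:\n{example['response']}"
    }

def short_enough(example):
    # Drop samples that would get truncated at training time.
    return len(tokenizer(example["text"]).input_ids) <= MAX_TOKENS

dataset = load_dataset("json", data_files="train.jsonl", split="train")
dataset = dataset.map(to_text, remove_columns=dataset.column_names)
dataset = dataset.filter(short_enough)
dataset.to_json("train_formatted.jsonl")  # ready for e.g. TRL's SFTTrainer
```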
i
What are you trying to achieve with the fine-tuning, @Mert Bozkır?
m
I’m gonna set up AutoGen and a Mixture of Experts to get neat results with small models! How does that sound to you?
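A rough sketch of what that setup could look like, assuming the classic pyautogen 0.2-style API and a small model served behind an OpenAI-compatible local endpoint; the model name, URL, and message are placeholders, not something agreed on here.

```python
# Sketch: point AutoGen at a locally served small model
# (e.g. a vLLM or llama.cpp server exposing an OpenAI-compatible API).
from autogen import AssistantAgent, UserProxyAgent

config_list = [
    {
        "model": "phi-2",                        # whatever the local server exposes
        "base_url": "http://localhost:8000/v1",  # placeholder endpoint
        "api_key": "not-needed",
    }
]

assistant = AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list, "temperature": 0.2},
)
user_proxy = UserProxyAgent(
    name="user",
    human_input_mode="NEVER",
    code_execution_config=False,  # keep the demo side-effect free
)

user_proxy.initiate_chat(
    assistant,
    message="Summarize the trade-offs of QLoRA fine-tuning in three bullets.",
)
```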
i
Interesting, but I think the models should work out of the box for this. cc @Mert Bozkır