# 06-technical-discussion
Mert Bozkır
01/10/2024, 3:25 AM
Anybody here fine-tuning LLMs? I'm really curious about data ingestion for fine-tuning small LLMs -> phi-2, QLoRA on Mistral. Where do you look when it comes to dataset preparation?
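(Editor's note: for context, a minimal sketch of what dataset preparation for instruction tuning often looks like in practice, assuming alpaca-style `instruction`/`input`/`output` records written as JSONL. The field names, file name, and example pair are illustrative placeholders, not from this thread.)

```python
import json

# Placeholder raw data; in practice this comes from scraped docs, support
# tickets, synthetic generations, etc.
raw_pairs = [
    {"question": "What is QLoRA?",
     "answer": "A method for fine-tuning with LoRA adapters on a 4-bit quantized base model."},
]

# Write one JSON object per line (JSONL), the format most fine-tuning
# tooling expects for instruction datasets.
with open("train.jsonl", "w") as f:
    for pair in raw_pairs:
        record = {
            "instruction": pair["question"],
            "input": "",
            "output": pair["answer"],
        }
        f.write(json.dumps(record) + "\n")
```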
Ivan Porollo
01/10/2024, 3:34 AM
maybe this helps?
https://github.com/OpenAccess-AI-Collective/axolotl
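(Editor's note: axolotl drives the whole run from a YAML config. For context, here is a rough hand-rolled sketch of the QLoRA setup it automates, using transformers + peft; the model name and hyperparameters are placeholders, not a recommendation from the thread.)

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",   # placeholder base model
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach LoRA adapters; only these small matrices are trained.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```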
Ishita Jindal
01/10/2024, 10:24 AM
What are you trying to achieve with the fine-tuning, @Mert Bozkır?
Mert Bozkır
01/10/2024, 1:34 PM
I'm gonna set up autogen and a Mixture of Experts to get neat results with small models! How does that sound to you?
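(Editor's note: a minimal AutoGen two-agent sketch, assuming the fine-tuned small model is served behind an OpenAI-compatible endpoint, e.g. via vLLM or llama.cpp server. The model name, endpoint URL, and prompt are placeholders.)

```python
from autogen import AssistantAgent, UserProxyAgent

# Placeholder config pointing at a locally served fine-tuned model.
config_list = [{
    "model": "phi-2-finetuned",              # placeholder model name
    "base_url": "http://localhost:8000/v1",  # placeholder endpoint
    "api_key": "not-needed",
}]

assistant = AssistantAgent(
    "assistant",
    llm_config={"config_list": config_list},
)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
)

# The user proxy kicks off the conversation with the assistant agent.
user_proxy.initiate_chat(
    assistant,
    message="Summarize the trade-offs of QLoRA vs full fine-tuning.",
)
```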
Ishita Jindal
01/11/2024, 1:01 PM
Interesting. But I think models should work out of the box for this. cc/ @Mert Bozkır