How to fine-tune LLMs using Unsloth (text2cypher use case)

22 May 2024

This tutorial covers three main topics:
1. An explanation of PEFT and LoRA as fine-tuning methods.
2. How to prepare/generate the dataset used for fine-tuning.
3. How to fine-tune the model using Unsloth (see the setup sketch right after this list).
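
For orientation, here is a minimal sketch of what the LoRA/PEFT setup with Unsloth looks like. The checkpoint name and every hyperparameter below are illustrative assumptions, not necessarily the values used in the video or the linked notebook:

# Minimal LoRA/PEFT setup with Unsloth (model name and hyperparameters are illustrative).
from unsloth import FastLanguageModel

# Load a 4-bit quantized base model; the exact checkpoint name is an assumption.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (PEFT): only these small low-rank matrices are trained,
# which is what makes fine-tuning fit on a single Colab GPU.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                        # LoRA rank
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing=True,
)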
How to use few shot prompting: • The easiest way to cha...
How to Convert any Text into a Knowledge Graph: • Convert any Text Data ...
Chapters:
00:00:00 - Intro
00:01:12 - How to prepare a dataset for fine-tuning
00:03:35 - Fine-tuning method (explanation of PEFT & LoRA)
00:05:41 - Initial setup
00:06:20 - First experiment
00:07:47 - Second experiment (adding the enhanced_schema & validate_cypher parameters)
00:09:10 - Unsloth setup
00:09:43 - Converting raw data from CSV to the proper format
00:10:30 - Fine-tuning the model
00:11:03 - Inference using the fine-tuned model
00:11:27 - Saving the model (LoRA weights) to Hugging Face
00:12:02 - Loading the saved model from Hugging Face
00:12:26 - Final evaluation
00:14:09 - Outro
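
The chapters from 00:09:43 to 00:10:30 cover turning the raw CSV into training text and running the trainer. A rough sketch of that flow; the file name, the column names ("question", "schema", "cypher"), the prompt wording, and the training arguments are assumptions, and the exact SFTTrainer argument names depend on your trl version:

# Sketch: CSV -> training text -> supervised fine-tuning with trl's SFTTrainer.
import pandas as pd
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer

PROMPT = (
    "Translate the question into a Cypher query using the provided graph schema.\n\n"
    "Schema: {schema}\nQuestion: {question}\nCypher: {cypher}"
)

df = pd.read_csv("text2cypher_train.csv")  # hypothetical file name

def to_text(row):
    # The EOS token marks where each training example ends, so generation knows when to stop.
    text = PROMPT.format(schema=row["schema"],
                         question=row["question"],
                         cypher=row["cypher"]) + tokenizer.eos_token
    return {"text": text}

dataset = Dataset.from_pandas(df).map(to_text)

trainer = SFTTrainer(
    model=model,                     # the LoRA-wrapped model from the setup sketch above
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,                # a short demo run; a real run would train longer
        learning_rate=2e-4,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()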
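Continuing the sketch for the inference and Hugging Face chapters (00:11:03 to 00:12:02): generate a Cypher query with the fine-tuned model, push only the LoRA adapter weights to the Hub, and reload them later. The repo name, token, example schema, and question are placeholders:

# Sketch: inference, pushing the LoRA adapters to the Hugging Face Hub, and reloading them.
from unsloth import FastLanguageModel

FastLanguageModel.for_inference(model)   # switch the Unsloth model to generation mode

prompt = (
    "Translate the question into a Cypher query using the provided graph schema.\n\n"
    "Schema: (:Person)-[:ACTED_IN]->(:Movie {title: STRING})\n"
    "Question: Which actors played in The Matrix?\n"
    "Cypher: "
)
inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])

# Push only the LoRA adapter weights (a small file, not the full base model).
model.push_to_hub("your-username/llama3-text2cypher-lora", token="hf_...")
tokenizer.push_to_hub("your-username/llama3-text2cypher-lora", token="hf_...")

# Reload later: Unsloth resolves the adapter repo and attaches it to its base model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="your-username/llama3-text2cypher-lora",
    max_seq_length=2048,
    load_in_4bit=True,
)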
Everything you'll need:
Github repo: github.com/projectwilsen/Know...
Colab: colab.research.google.com/dri...
Source code for generating the text2cypher dataset: github.com/tomasonjo/text2cypher
Unsloth: unsloth.ai/
Unsloth Github repo: github.com/unslothai/unsloth
References
LoRA paper: arxiv.org/pdf/2106.09685
#finetuning #unsloth #llm #llama3 #knowledgegraph

Comments
  • Another interesting approach is to use a chat prompt template by Tomaz Bratanic. Check it out here: github.com/neo4j-labs/text2cypher/blob/main/finetuning/unsloth-llama3/llama3_text2cypher_chat.ipynb

    @geralduswilsen · 4 days ago