This guide walks you through fine-tuning Gemma on a custom text-to-SQL dataset using Hugging Face Transformers and TRL; a minimal code sketch of the recipe follows the chapter list below.
Chapters:
00:00 - Introduction
00:34 - What is Quantized Low-Rank Adaptation (QLoRA)?
01:08 - Set up the development environment
02:06 - Create and prepare the fine-tuning dataset
04:32 - Fine-tune Gemma using TRL and the SFTTrainer
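The linked resource below contains the full walkthrough; what follows is only a minimal sketch of the QLoRA + SFTTrainer recipe, assuming a recent transformers/TRL/PEFT/bitsandbytes stack and access to the gated Gemma weights on Hugging Face. The model id, the tiny inline dataset, and all hyperparameters are illustrative placeholders, not the exact values used in the video.

```python
import torch
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

model_id = "google/gemma-2-2b-it"  # assumed checkpoint; any Gemma variant works

# QLoRA = 4-bit (NF4) quantized base model + small trainable LoRA adapters.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    attn_implementation="eager",  # recommended for Gemma 2
)

# Stand-in for the custom text-to-SQL dataset: chat-style samples that
# SFTTrainer formats with the model's chat template.
train_dataset = Dataset.from_list([
    {
        "messages": [
            {"role": "user",
             "content": "Schema: CREATE TABLE users (id INT, name TEXT);\n"
                        "Question: How many users are there?"},
            {"role": "assistant", "content": "SELECT COUNT(*) FROM users;"},
        ]
    },
    # ... more examples ...
])

peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

args = SFTConfig(
    output_dir="gemma-text-to-sql",  # illustrative output path
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    peft_config=peft_config,  # SFTTrainer wraps the 4-bit model with LoRA
)
trainer.train()
trainer.save_model()  # saves only the LoRA adapter weights
```

Because only the low-rank adapters are trained while the base weights stay frozen in 4-bit, a run like this can fit on a single consumer GPU.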
Resources:
Fine-Tune Gemma using Hugging Face Transformers and QLoRA → https://goo.gle/4jVpjjj
Subscribe to Google for Developers → https://goo.gle/developers
Speaker: Philipp Schmid
Products mentioned: Gemma