V3.0.0 Released
Fine-tune SLMs in seconds
Upload your dataset, get AI-matched model recommendations, and receive a ready-to-run Google Colab notebook. Powered by Unsloth for 2x faster training with 70% less GPU memory.
2x faster training with Unsloth
70% less GPU memory usage
No setup required
Runs on free Google Colab
18 Models
128K Context
4 Presets
training.ipynb
# SLMGEN Generated Notebook
from unsloth import FastLanguageModel
import torch

# Load a 4-bit quantized base model (fits on a free Colab GPU)
model, tokenizer = FastLanguageModel.from_pretrained(
    "Qwen/Qwen2.5-3B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
from trl import SFTTrainer
Training started...
Step 150/1000
Loss: 0.84  LR: 2e-4
Ready to run