V3.0.0 Released

Fine-tune SLMs in seconds

Upload your dataset, get AI-matched model recommendations, and receive a ready-to-run Google Colab notebook. Powered by Unsloth for 2x faster training with 70% less memory.

2x faster training with Unsloth
70% less GPU memory usage
No setup required
Runs on free Google Colab
18 Models · 128K Context · 4 Presets
training.ipynb
# SLMGEN Generated Notebook
from unsloth import FastLanguageModel
import torch

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-3B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
from trl import SFTTrainer
Training started...
Step 150/1000
Loss: 0.84 · LR: 2e-4
Ready to run

What's New in V3.0.0

More models, more formats, more flexibility

Dataset Converter

CSV, TSV, JSON, Alpaca, ShareGPT → ChatML
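Under the hood, converting an Alpaca-style record into ChatML is a small structural mapping. A minimal sketch (the function name is illustrative, not SLMGEN's actual API; field names follow the standard Alpaca format):

```python
# Sketch of an Alpaca -> ChatML record conversion.
# alpaca_to_chatml is a hypothetical helper, not SLMGEN's real API.
def alpaca_to_chatml(record):
    """Map one Alpaca-style record to a ChatML-style message list."""
    user = record["instruction"]
    if record.get("input"):  # Alpaca's optional context field
        user += "\n\n" + record["input"]
    return {
        "messages": [
            {"role": "user", "content": user},
            {"role": "assistant", "content": record["output"]},
        ]
    }

sample = {
    "instruction": "Summarize the text.",
    "input": "Unsloth speeds up fine-tuning.",
    "output": "Unsloth makes fine-tuning faster.",
}
converted = alpaca_to_chatml(sample)
print(converted["messages"][0]["role"])  # -> user
```

The same shape works for CSV/TSV/JSON rows once they are parsed into dicts; ShareGPT conversations already carry role/content pairs and map across more directly.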

Training Presets

Quick Demo, Production, Edge Optimize
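A preset is just a named bundle of training arguments. A sketch of what that table could look like (the preset names match the UI, but the hyperparameter values here are assumptions, not SLMGEN's actual defaults):

```python
# Illustrative preset table; values are placeholders, not SLMGEN's defaults.
PRESETS = {
    "quick_demo":    {"max_steps": 60,   "lr": 2e-4, "batch_size": 2},
    "production":    {"max_steps": 1000, "lr": 2e-4, "batch_size": 8},
    "edge_optimize": {"max_steps": 500,  "lr": 1e-4, "batch_size": 4},
}

def preset(name):
    """Look up a preset's training arguments by name."""
    return PRESETS[name]

print(preset("quick_demo")["max_steps"])  # -> 60
```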

Export Pipeline

Ollama, GGUF, vLLM, HuggingFace

18 Models

Up to 84B params, 128K context

Qwen 3.5 32B
Llama 3.3 8B
DeepSeek V3 84B
Mistral Small 3 24B
Gemma 3 4B
SmolLM3 3B
18 SLM Models: latest 2026 models
128K Max Context: token context window
4 Presets: Quick Demo to Production
4 Export Formats: Ollama, GGUF, vLLM, HF

How It Works

Four simple steps to your fine-tuned model

01 Upload: Drop your dataset or convert from CSV/JSON

02 Analyze: Auto-detect quality, format, characteristics

03 Match: AI scores 18 models for your task & data

04 Generate: Get a ready-to-run Colab notebook
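The "Match" step can be pictured as scoring each candidate model against the dataset's characteristics. A toy sketch (the scoring heuristic and field names here are hypothetical, not SLMGEN's actual matcher):

```python
# Toy sketch of the "Match" step: rank candidate models against dataset
# characteristics. The heuristic and fields are hypothetical.
def score_model(model, dataset):
    score = 0.0
    # Prefer models whose context window covers the longest example.
    if model["context"] >= dataset["max_tokens"]:
        score += 1.0
    # Prefer smaller models for small datasets (cheaper, less overfitting).
    if dataset["rows"] < 1000 and model["params_b"] <= 8:
        score += 0.5
    return score

models = [
    {"name": "Gemma 3 4B", "params_b": 4, "context": 128_000},
    {"name": "DeepSeek V3 84B", "params_b": 84, "context": 128_000},
]
dataset = {"rows": 500, "max_tokens": 4096}
best = max(models, key=lambda m: score_model(m, dataset))
print(best["name"])  # -> Gemma 3 4B
```

A real matcher would weigh task type, data format, and quality signals from the Analyze step as well; the point is only that each of the 18 models gets a comparable score before the notebook is generated.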

Ready to fine-tune?

Start for free. No setup required. Runs on Google Colab with free GPU access.

Start Fine-Tuning Now