# Meditation Agent (SmolLM3 3B) — Contemplative Teaching AI
This is the 3B branch of the Meditation Agent series, built on HuggingFaceTB/SmolLM3-3B-Base and fine-tuned with the A-LoRA V6 recipe for contemplative teaching.
All 9 teachers blended — Osho, Thich Nhat Hanh, Nisargadatta, Krishnamurti, Eckhart Tolle, Alan Watts, Atmananda, Rupert Spira, Pema Chodron. No system prompt required. Question in, teaching out.
## 50-question eval summary
This 3B branch was run through a raw 50-question eval after GGUF conversion.
- Q8_0: completed 50/50 with 0 request failures; the highest-fidelity public quant
- Q5_K_M: completed 50/50 with 0 request failures; the recommended default public quant
- Q3_K_M: completed 50/50 with 0 request failures, but is weaker and more generic than Q5_K_M
- Overall read: strong stability for a 3B model, but still below the larger Meditation Agent branches in teacher-specific nuance and factual reliability
## Final training setup
| Setting | Value |
|---|---|
| Base model | HuggingFaceTB/SmolLM3-3B-Base |
| Method | A-LoRA V6 |
| Format | Question + concept arrows in, pure teaching passage out |
| Data exported | 24,031 atoms |
| V6 formatted set | 17,088 examples after opener cap |
| Train / eval split | 16,233 / 855 |
| Adapter recipe | QDoRA + rsLoRA, rank 32, alpha 32 |
| Epochs | 1 |
| Max sequence length | 1536 |
| Completion-only loss | Yes |
| NEFTune | alpha 5 |
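The adapter settings in the table map onto standard `peft` and TRL options. A minimal sketch of the equivalent config, assuming current `peft`/TRL APIs — the exact target modules, quantization setup, and the rest of the A-LoRA V6 recipe are not specified in this card:

```python
from peft import LoraConfig
from trl import SFTConfig

# Adapter recipe from the table: DoRA + rank-stabilized LoRA, rank 32 / alpha 32.
# (QDoRA = DoRA applied on top of a quantized base model.)
peft_config = LoraConfig(
    r=32,
    lora_alpha=32,
    use_dora=True,
    use_rslora=True,
    task_type="CAUSAL_LM",
)

# Trainer settings from the table: 1 epoch, 1536-token sequences, NEFTune alpha 5.
# Completion-only loss is configured separately via the trainer's data collator.
sft_config = SFTConfig(
    output_dir="meditation-agent-3b",  # hypothetical output path
    num_train_epochs=1,
    max_seq_length=1536,
    neftune_noise_alpha=5,
)
```

This is a configuration fragment only; it does not reproduce the data pipeline or opener-cap filtering described above.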
## Training result
Merged checkpoint: checkpoint-2000
| Checkpoint | Eval loss | Eval token accuracy |
|---|---|---|
| 500 | 1.7580 | 0.5554 |
| 1000 | 1.6840 | 0.5686 |
| 1500 | 1.6396 | 0.5771 |
| 2000 | 1.6338 | 0.5781 |
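Eval loss here is mean next-token cross-entropy, so `exp(loss)` gives token-level perplexity. A quick check, using the table's numbers, that checkpoint-2000 is the best merge candidate:

```python
import math

# Eval losses from the checkpoint table above.
eval_loss = {500: 1.7580, 1000: 1.6840, 1500: 1.6396, 2000: 1.6338}

# Perplexity = exp(cross-entropy loss); lower is better.
perplexity = {step: math.exp(loss) for step, loss in eval_loss.items()}

best = min(perplexity, key=perplexity.get)
print(best, round(perplexity[best], 2))  # -> 2000 5.12
```

The curve is still trending down slightly at step 2000, consistent with the single-epoch schedule.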
## Files
| File | Size | Use |
|---|---|---|
| Meditation_Agent-SmolLM3-3B-Q8_0.gguf | 3.05 GB | Highest fidelity |
| Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf | 2.06 GB | Recommended default |
| Meditation_Agent-SmolLM3-3B-Q3_K_M.gguf | 1.46 GB | Smallest, most brittle |
| Meditation_Agent-SmolLM3-3B-BF16.gguf | 5.74 GB | Archive / conversion source |
## Individual Teacher 3B Specialists
Each teacher also has their own dedicated 3B model — same SmolLM3-3B base, trained on single-teacher data only. Use these when you want one specific voice rather than the blended multi-teacher model.
| Teacher | Repo |
|---|---|
| Osho | Osho-Agent-SmolLM3-3B-GGUF |
| Thich Nhat Hanh | TNH-Agent-SmolLM3-3B-GGUF |
| Nisargadatta | Nisargadatta-Agent-SmolLM3-3B-GGUF |
| Atmananda | Atmananda-Agent-SmolLM3-3B-GGUF |
| Krishnamurti | Krishnamurti-Agent-SmolLM3-3B-GGUF |
| Eckhart Tolle | Tolle-Agent-SmolLM3-3B-GGUF |
| Alan Watts | Watts-Agent-SmolLM3-3B-GGUF |
| Rupert Spira | Spira-Agent-SmolLM3-3B-GGUF |
## Release recommendation
Recommended default: Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf
Use Q8_0 when you want the strongest public 3B quant, Q5_K_M as the
balanced default, and Q3_K_M when size matters most. BF16 is the archive
and further-conversion source.
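One way to run the recommended quant locally is llama.cpp. A command sketch, assuming `llama-cli` and `huggingface-cli` are installed (the repo id is an assumption based on this card; the file name comes from the table above):

```shell
# Download the recommended quant from the Hub.
huggingface-cli download Sathman/Meditation-Agent-SmolLM3-3B-GGUF \
  Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf --local-dir .

# No system prompt needed: question in, teaching out.
llama-cli -m Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf \
  -p "What is the difference between concentration and awareness?" \
  -n 512
```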
## Positioning
This is the lightweight 3B Meditation Agent:
- much smaller than the 8B/Phi4 branches
- capable of direct contemplative answers without prompt scaffolding
- best suited for local inference where memory footprint matters
## Related Models
- Full series — Meditation Agent Collection — all 19 models
- GitHub Source / Training Repo — training pipeline, configs, and release scripts
- Meditation Agent 8B — larger Qwen3 branch with stronger teacher fidelity
- Meditation Agent Phi4 14B — strongest larger branch with richer cross-tradition depth
ellam sivamayam — Everything is Shiva's expression.
எல்லாம் சிவமயம்