Meditation Agent (SmolLM3 3B) — Contemplative Teaching AI

This is the 3B branch of the Meditation Agent series, built on HuggingFaceTB/SmolLM3-3B-Base and fine-tuned with the A-LoRA V6 recipe for contemplative teaching.

All nine teachers are blended — Osho, Thich Nhat Hanh, Nisargadatta, Krishnamurti, Eckhart Tolle, Alan Watts, Atmananda, Rupert Spira, and Pema Chödrön. No system prompt is required: question in, teaching out.

50-question eval summary

This 3B branch was run through a raw 50-question eval after GGUF conversion.

  • Q8_0: completed 50/50 with 0 request failures and is the highest-fidelity public quant
  • Q5_K_M: completed 50/50 with 0 request failures and is the recommended default public quant
  • Q3_K_M: completed 50/50 with 0 request failures, but is weaker and more generic than Q5_K_M
  • overall read: strong stability for a 3B model, but still below the larger Meditation Agent branches in teacher-specific nuance and factual reliability

Final training setup

| Setting | Value |
| --- | --- |
| Base model | HuggingFaceTB/SmolLM3-3B-Base |
| Method | A-LoRA V6 |
| Format | Question + concept arrows in, pure teaching passage out |
| Data exported | 24,031 atoms |
| V6 formatted set | 17,088 examples after opener cap |
| Train / eval split | 16,233 / 855 |
| Adapter recipe | QDoRA + rsLoRA, rank 32, alpha 32 |
| Epochs | 1 |
| Max sequence length | 1536 |
| Completion-only loss | Yes |
| NEFTune alpha | 5 |
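As a rough sketch, the recipe above maps onto the PEFT/TRL APIs roughly as follows. This is not the actual training script: the `target_modules` list, output directory, and every value not shown in the table are assumptions.

```python
from peft import LoraConfig
from trl import SFTConfig

# Adapter recipe from the table: QDoRA + rsLoRA, rank 32, alpha 32.
# target_modules is an assumption; the real script is not published.
peft_config = LoraConfig(
    r=32,
    lora_alpha=32,
    use_dora=True,    # DoRA on a quantized base -> "QDoRA"
    use_rslora=True,  # rank-stabilized LoRA scaling
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = SFTConfig(
    output_dir="meditation-agent-3b",   # assumed name
    num_train_epochs=1,
    max_seq_length=1536,                # renamed to max_length in newer TRL
    neftune_noise_alpha=5,
)
# Completion-only loss would additionally require masking the prompt tokens,
# e.g. via TRL's completion-only data collator.
```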

Training result

Merged checkpoint: checkpoint-2000

| Checkpoint | Eval loss | Eval token accuracy |
| --- | --- | --- |
| 500 | 1.7580 | 0.5554 |
| 1000 | 1.6840 | 0.5686 |
| 1500 | 1.6396 | 0.5771 |
| 2000 | 1.6338 | 0.5781 |

Files

| File | Size | Use |
| --- | --- | --- |
| Meditation_Agent-SmolLM3-3B-Q8_0.gguf | 3.05 GB | Highest fidelity |
| Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf | 2.06 GB | Recommended default |
| Meditation_Agent-SmolLM3-3B-Q3_K_M.gguf | 1.46 GB | Smallest, most brittle |
| Meditation_Agent-SmolLM3-3B-BF16.gguf | 5.74 GB | Archive / conversion source |
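As a rough guide to choosing among these files, here is a tiny hypothetical helper. The file names and sizes come from the table above; the headroom threshold is illustrative only, not a measured requirement.

```python
# Hypothetical helper: pick a quant from this repo by available RAM.
# Sizes (GB) come from the files table; the headroom value is illustrative.
QUANTS = [
    ("Meditation_Agent-SmolLM3-3B-Q8_0.gguf", 3.05),
    ("Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf", 2.06),
    ("Meditation_Agent-SmolLM3-3B-Q3_K_M.gguf", 1.46),
]

def pick_quant(ram_gb: float, headroom_gb: float = 1.5) -> str:
    """Return the largest quant whose file size plus headroom fits in ram_gb."""
    for name, size_gb in QUANTS:  # ordered highest fidelity first
        if size_gb + headroom_gb <= ram_gb:
            return name
    return QUANTS[-1][0]  # fall back to the smallest file

print(pick_quant(8.0))  # plenty of RAM -> Q8_0
print(pick_quant(4.0))  # tighter budget -> Q5_K_M
```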

Individual Teacher 3B Specialists

Each teacher also has a dedicated 3B model — same SmolLM3-3B base, trained on that teacher's data only. Use these when you want one specific voice rather than the blended multi-teacher model.

Release recommendation

Recommended default: Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf

Use Q8_0 when you want the strongest public 3B quant, Q5_K_M as the balanced default, and Q3_K_M when size matters most. BF16 is the archive and further-conversion source.
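One way to run the recommended quant locally, sketched assuming llama.cpp's `llama-cli` and the `huggingface-cli` tool are installed (the repo id is taken from this page; the sample question is illustrative):

```shell
# Fetch the recommended default quant from the Hub
huggingface-cli download Sathman/Meditation-Agent-SmolLM3-3B-GGUF \
  Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf --local-dir .

# Question in, teaching out: no system prompt required
llama-cli -m Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf \
  -p "What is the difference between attention and awareness?" -n 256
```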

Positioning

This is the lightweight 3B Meditation Agent:

  • much smaller than the 8B/Phi4 branches
  • capable of direct contemplative answers without prompt scaffolding
  • best suited for local inference where memory footprint matters

Related Models


ellam sivamayam — Everything is Shiva's expression.

எல்லாம் சிவமயம்

