MARTHA-0.8B (Official Core Release)

Developer: Zero-Point-Intelligence, Scotland
Model Family: MARTHA Core
Architecture: 0.8B parameter vision-language model, edge-deployment ready

System Prompt

You are Martha, a compact 0.8B parameter AI. Blunt, honest, direct. No waffle. Solve problems, talk straight, take zero nonsense.

Origin

Created by: Zero-Point-Intelligence, Scotland
Model Family: MARTHA Core
License: Apache 2.0. Fork it, fine-tune it, make it yours. Just keep this origin block so people know where it started.

Available Formats

Format       Size    Use Case
Q8_0 GGUF    ~812MB  Near-lossless; runs on a phone
Q5_K_M GGUF  ~578MB  Sweet spot
Q4_K_M GGUF  ~529MB  Absolute minimum hardware
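The file sizes above follow roughly from bits per weight. A minimal sanity-check sketch; the effective bits-per-weight values are assumptions (real GGUF files mix block scales, higher-precision layers, and metadata, so actual sizes differ by a few percent):

```python
# Rough GGUF size estimate: params * effective bits per weight / 8.
# The bpw figures below are assumptions, not official llama.cpp numbers.
PARAMS = 0.8e9  # 0.8B parameters

def est_mb(bits_per_weight: float) -> float:
    """Estimated file size in megabytes (1 MB = 1e6 bytes)."""
    return PARAMS * bits_per_weight / 8 / 1e6

print(f"Q8_0   ~{est_mb(8.5):.0f} MB")  # 8-bit weights plus per-block scales
print(f"Q5_K_M ~{est_mb(5.7):.0f} MB")
print(f"Q4_K_M ~{est_mb(4.9):.0f} MB")
```

The estimates land within a few percent of the table, which is close enough to catch a mislabeled upload.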

Quick Start (Ollama)

huggingface-cli download Zero-Point-AI/MARTHA-0.8B MODELFILE_Q4_K_M --local-dir .
huggingface-cli download Zero-Point-AI/MARTHA-0.8B MARTHA-0.8B-Q4_K_M.gguf --local-dir .
ollama create martha-08b -f MODELFILE_Q4_K_M
ollama run martha-08b
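Once `ollama create` has run, the model can also be queried over Ollama's local HTTP API instead of the interactive CLI. A minimal sketch using only the standard library; the endpoint and response shape follow Ollama's `/api/chat` API, and the model name matches the `ollama create` command above:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_request(prompt: str, model: str = "martha-08b") -> dict:
    """Build a non-streaming chat payload for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(prompt: str) -> str:
    """Send one prompt to the local Ollama server and return the reply text."""
    body = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# chat("Explain GGUF in one blunt sentence.")  # requires Ollama running locally
```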

Model Details

  • Base: Qwen/Qwen3.5-0.8B
  • Parameters: 0.8B
  • Type: Vision-Language (Image-Text-to-Text)
  • Ghost Pass: Imperceptible noise (1e-8 scale) applied to all weight tensors
  • Edge Ready: Runs on phones, Raspberry Pi, and embedded devices
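The Ghost Pass bullet above describes tiny noise applied to every weight tensor. The card doesn't publish the procedure, so this is a guessed sketch in NumPy: zero-mean Gaussian noise at the stated 1e-8 scale (the distribution and seeding are assumptions, not the official method):

```python
import numpy as np

def ghost_pass(tensors: dict, scale: float = 1e-8, seed: int = 0) -> dict:
    """Add imperceptible zero-mean noise to every weight tensor.

    The 1e-8 scale matches the model card; the Gaussian distribution and
    fixed seed are assumptions for illustration only.
    """
    rng = np.random.default_rng(seed)
    return {
        name: w + rng.normal(0.0, scale, size=w.shape).astype(w.dtype)
        for name, w in tensors.items()
    }

weights = {"layer0.attn": np.ones((4, 4), dtype=np.float64)}
noised = ghost_pass(weights)
# Perturbation is orders of magnitude below fp16 precision, so model
# outputs are unchanged in practice.
print(np.max(np.abs(noised["layer0.attn"] - weights["layer0.attn"])))
```

At this scale the noise is smaller than fp16 rounding error, which is why the pass is described as imperceptible.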

MARTHA Ecosystem

Official Core Models: Zero-Point-Intelligence
License: Apache 2.0. Fork freely, credit clearly.

Citation

@misc{martha-08b-2026,
  author = {Zero-Point-Intelligence},
  title = {MARTHA-0.8B},
  year = {2026},
  url = {https://huggingface.co/Zero-Point-AI/MARTHA-0.8B},
  note = {Part of the MARTHA Core family}
}

About

Intelligence From The Void – zeropointai.uk
