Sam Purkis

SamPurkis
  • smpurkis
  • sam-purkis-4baa6668

AI & ML interests

None yet

Recent Activity

reacted to eaddario's post with 👍 3 days ago
Experimental global target bits-per-weight quantization of mistralai/Ministral-3-14B-Instruct-2512 and mistralai/Ministral-3-14B-Reasoning-2512.

Unlike standard llama.cpp quantizations that rely on fixed type heuristics (e.g., Q4_K_M), the Target BPW approach optimizes per-tensor precision where it matters most and produces high-quality models that meet a precise global file-size target.

Key advantages:
  • VRAM maximization: can generate high-quality models sized exactly to fit hardware constraints (e.g., fitting the model into exactly 24 GB of VRAM).
  • Data-driven precision: the quantization mix is determined by actual weight-error sensitivity rather than hardcoded rules, often yielding better PPL/KLD size trade-offs.

Full benchmarks (PPL, KLD, ARC, MMLU, etc.) and methodology in the model cards:
https://huggingface.co/eaddario/Ministral-3-14B-Instruct-2512-GGUF
https://huggingface.co/eaddario/Ministral-3-14B-Reasoning-2512-GGUF

(A rough sketch of this bit-allocation idea follows the activity list.)
liked a Space 7 days ago
Qwen/Qwen3-TTS
published a model 8 months ago
SamPurkis/Qwen3-16B-A1.5B
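As a rough illustration of the target-BPW idea described in the quoted post: start every tensor at the cheapest quantization type, then spend the remaining bit budget upgrading the most error-sensitive tensors first until the global bits-per-weight target is exhausted. The sketch below is a minimal Python toy under stated assumptions; the tensor names, sensitivity scores, and per-type bit costs are all made up, and this is not eaddario's or llama.cpp's actual implementation.

```python
# Minimal sketch of a global target-BPW allocation (hypothetical; not the
# actual llama.cpp / eaddario code). The quant-type bit costs below are
# rough illustrative figures, not exact GGUF numbers.
QUANT_BPW = {"q3_k": 3.4, "q4_k": 4.5, "q5_k": 5.5, "q6_k": 6.6, "q8_0": 8.5}

def target_bpw_mix(tensors, target_bpw):
    """tensors: list of (name, n_params, sensitivity) -> {name: quant type}.

    Start every tensor at the cheapest type, then upgrade the most
    error-sensitive tensors first while the global bit budget allows.
    """
    total_params = sum(n for _, n, _ in tensors)
    budget_bits = target_bpw * total_params

    # Baseline: everything at the cheapest type.
    mix = {name: "q3_k" for name, _, _ in tensors}
    used_bits = sum(QUANT_BPW["q3_k"] * n for _, n, _ in tensors)

    ladder = sorted(QUANT_BPW, key=QUANT_BPW.get)  # cheapest -> priciest
    # Most sensitive tensors get upgraded first.
    for name, n_params, _ in sorted(tensors, key=lambda t: -t[2]):
        for quant in ladder[1:]:
            extra = (QUANT_BPW[quant] - QUANT_BPW[mix[name]]) * n_params
            if used_bits + extra > budget_bits:
                break
            used_bits += extra
            mix[name] = quant
    return mix

# E.g. a 4.8 BPW target puts a 14B-parameter model near 14e9 * 4.8 / 8
# bytes, i.e. roughly 8.4 GB, so file size is controlled directly.
layers = [("blk.0.attn_q", 4_000_000, 0.9), ("blk.0.ffn_up", 12_000_000, 0.3)]
print(target_bpw_mix(layers, target_bpw=4.8))
```

The real method presumably derives sensitivities from measured quantization-error statistics per tensor rather than hand-picked scores, but the budget mechanics are the same in spirit: precision goes where it matters most, and the global size target is met by construction.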

Organizations

None yet

SamPurkis's Spaces (1)

⚡ Supertonic 2 (TTS) · Running · Jan 6
Lightning-Fast, On-Device, Multilingual TTS