Qwen3-ASR-0.6B-GGUF
This model was converted from Qwen/Qwen3-ASR-0.6B to GGUF format using llama.cpp's `convert_hf_to_gguf.py` script.
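For reference, a similar conversion can be reproduced locally. The commands below are a minimal sketch, assuming a checkout of the llama.cpp repository and the `huggingface_hub` CLI; the local directory, output file name, and f16 output type are illustrative choices, not necessarily what was used to produce the files in this repository.

```sh
# Fetch the original checkpoint from Hugging Face (assumes huggingface-cli is installed).
huggingface-cli download Qwen/Qwen3-ASR-0.6B --local-dir Qwen3-ASR-0.6B

# Convert the checkpoint to GGUF with llama.cpp's conversion script.
# The output file name and f16 precision are illustrative.
python convert_hf_to_gguf.py ./Qwen3-ASR-0.6B \
    --outfile Qwen3-ASR-0.6B-f16.gguf \
    --outtype f16
```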
To run it with llama-server (from llama.cpp), which fetches the GGUF files from this repository automatically:

llama-server -hf ggml-org/Qwen3-ASR-0.6B-GGUF
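Once the server is up (by default on port 8080), it exposes an OpenAI-compatible HTTP API. The request below is a minimal sketch assuming a recent llama.cpp build with audio/multimodal input support enabled; `sample.wav` and the prompt text are placeholders, and the `input_audio` payload shape follows the OpenAI-style chat-completions format rather than anything specific to this model card.

```sh
# Send a WAV file to the running server for transcription.
# Assumes the default port 8080 and a build that accepts audio input
# on the OpenAI-compatible chat-completions endpoint.
# Note: `base64 -w0` is the GNU coreutils form; adjust on other platforms.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {
            "role": "user",
            "content": [
              {"type": "input_audio",
               "input_audio": {"data": "'"$(base64 -w0 sample.wav)"'", "format": "wav"}},
              {"type": "text", "text": "Transcribe this audio."}
            ]
          }
        ]
      }'
```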