This is a repo of experimental GGUFs for the backend-agnostic implementation of Kimi-Linear model support, which requires the llama.cpp fork below. You can git clone it and compile locally:
```
git clone https://github.com/ymcki/llama.cpp --branch Kimi-Linear
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j 6
```

Example run:

```
./build/bin/llama-cli -m ~/Kimi-Linear-48B-A3B-Instruct-GGUF/Kimi-Linear-48B-A3B-Instruct.Q4_K_M.gguf -c 8192 -cmoe -ngl 100 --mmap
```
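If you haven't downloaded a GGUF yet, one way to fetch a single quant from this repo (assuming the huggingface_hub CLI is installed; the file and directory names here just mirror the example above) is:

```
# download only the Q4_K_M quant into a local directory
huggingface-cli download ymcki/Kimi-Linear-48B-A3B-Instruct-GGUF \
  Kimi-Linear-48B-A3B-Instruct.Q4_K_M.gguf \
  --local-dir ~/Kimi-Linear-48B-A3B-Instruct-GGUF
```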
I am only going to make GGUFs without an imatrix, plus GGUFs with an imatrix based on c4_en_ja_imatrix.txt for better Japanese performance, since bartowski and unsloth will make GGUFs with an English imatrix anyway. A rough sketch of the imatrix workflow follows.
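This is approximately how an imatrix quant is produced with the stock llama.cpp tools (the F16 input file name is illustrative):

```
# 1. compute an importance matrix from the English/Japanese calibration text
./build/bin/llama-imatrix -m Kimi-Linear-48B-A3B-Instruct.f16.gguf \
  -f c4_en_ja_imatrix.txt -o imatrix.dat

# 2. quantize using that imatrix
./build/bin/llama-quantize --imatrix imatrix.dat \
  Kimi-Linear-48B-A3B-Instruct.f16.gguf \
  Kimi-Linear-48B-A3B-Instruct.Q4_K_M.gguf Q4_K_M
```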
Base perplexity for the F16 GGUF is 7.291970 ± 0.048577. The perplexity and KL divergence numbers in the table below come from llama.cpp's llama-perplexity tool, as sketched next.
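A minimal sketch of the measurement, assuming a test text file test.txt (the actual test set used here is not specified):

```
# save the F16 logits once as the reference
./build/bin/llama-perplexity -m Kimi-Linear-48B-A3B-Instruct.f16.gguf \
  -f test.txt --kl-divergence-base f16_logits.bin

# compare a quant's perplexity and KL divergence against the saved F16 logits
./build/bin/llama-perplexity -m Kimi-Linear-48B-A3B-Instruct.Q4_K_M.gguf \
  --kl-divergence-base f16_logits.bin --kl-divergence
```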
It seems the MLA KV cache can only be run at F16, probably because MLA is itself a kind of compression. You can use the table below to see how much context you can run on a single 24GB card.
| Quant Type | imatrix | File Size | Perplexity | KL Divergence | Description |
|---|---|---|---|---|---|
| Q4_K_M | c4_en_ja_imatrix.txt | 29.70GB | 7.147482 ± 0.047851 | 0.081894 ± 0.001521 | Good |
| Q4_K_M | None | 29.70GB | 7.172188 ± 0.048107 | 0.083700 ± 0.001520 | Good. Slightly worse than imatrix |
| MXFP4_MOE | c4_en_ja_imatrix.txt | 27.21GB | 7.179840 ± 0.047966 | 0.088789 ± 0.001544 | Good |
| MXFP4_MOE | None | 27.21GB | 7.179840 ± 0.047966 | 0.088789 ± 0.001544 | Good. Same as the imatrix version |
| IQ3_M | c4_en_ja_imatrix.txt | 21.55GB | 7.368516 ± 0.048425 | 0.113435 ± 0.001457 | Quite Good. Can run 96k context on a single 24GB card. |
| IQ3_XS | c4_en_ja_imatrix.txt | 20.17GB | 7.534649 ± 0.049461 | 0.129645 ± 0.001448 | Quite Good. Can run 240k context on a single 24GB card. |
| IQ2_M | c4_en_ja_imatrix.txt | 16.13GB | 8.207663 ± 0.054957 | 0.224437 ± 0.001536 | Slightly better than Q2_K, and you can run 464k context on a single 24GB card. |
| Q2_K | c4_en_ja_imatrix.txt | 18.03GB | 8.295144 ± 0.057566 | 0.221437 ± 0.001617 | So-so but you can run 288k context on a single 24GB card. |
| Q2_K | None | 18.03GB | 8.648201 ± 0.059234 | 0.267082 ± 0.001659 | Worse than imatrix |
As expected, the imatrix has no effect on MXFP4_MOE. According to this reddit thread, its perplexity is about the same as IQ4_XS but with about 6% larger file size. Since it doesn't support an imatrix, it probably only makes sense if you are using 50x0 cards that have native FP4 support, and even that is a question mark.
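As a concrete example of the context claims in the table (the file name is assumed to follow the naming pattern above; flags mirror the run command earlier), the IQ3_XS quant should fit roughly 240k context on a single 24GB card:

```
# 240k context ≈ 245760 tokens
./build/bin/llama-cli -m ~/Kimi-Linear-48B-A3B-Instruct-GGUF/Kimi-Linear-48B-A3B-Instruct.IQ3_XS.gguf \
  -c 245760 -cmoe -ngl 100 --mmap
```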