---
license: llama2
base_model: meta-llama/CodeLlama-7b-Python-hf
model_name: CodeLlama-7b-Python GGUF
model_type: llama
language:
- code
tags:
- LLM
- llama2
- llama-2
- CodeLlama
- CodeLlama-Python
- CodeLlama-7B-Python
- llama.cpp
- Python
- 7B
---

# Model Card: Meta CodeLlama-7b-Python GGUF

Original Meta model [CodeLlama-7b-Python](https://llama.meta.com/llama-downloads/) (see the [announcement blog post](https://ai.meta.com/blog/code-llama-large-language-model-coding/) and the [codellama repository](https://github.com/meta-llama/codellama)) converted into GGUF format with [llama.cpp](https://github.com/ggerganov/llama.cpp).

*License*: "Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved."

[Use Policy](https://llama.meta.com/use-policy/)

## Run model

llama.cpp can load a split model directly: point `-m` at the first chunk and the remaining chunks are picked up automatically from the same directory.

```bash
./main -m ggml-model-f32-00001-of-00010.gguf -p "def fibonacci("
```

## Convert to gguf

```bash
python3 convert.py ../codellama/CodeLlama-7b-Python
```

## Split Model

The original Meta `CodeLlama-7b-Python` model was converted to GGUF with [convert.py](https://github.com/ggerganov/llama.cpp), producing `CodeLlama-7b-Python/ggml-model-f32.gguf`, and then split into smaller chunks with [gguf-split](https://github.com/ggerganov/llama.cpp) using `--split-max-tensors 32`.

```bash
python3 convert.py ../codellama/CodeLlama-7b-Python
./gguf-split --split --split-max-tensors 32 ./models/CodeLlama-7b-Python/ggml-model-f32.gguf ./models/CodeLlama-7b-Python/ggml-model-f32
```

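As a rough sanity check on the chunk count: assuming a 7B LLaMA-architecture model carries 291 tensors (32 layers × 9 tensors each, plus token embeddings, the final norm, and the output head), `--split-max-tensors 32` should yield ten chunks, which matches the `-of-00010` suffix in the file names above. A minimal sketch of that arithmetic:

```bash
# Assumed tensor count for a 7B LLaMA-architecture model in GGUF.
tensors=291
max_per_chunk=32
# Ceiling division: number of chunks gguf-split would produce.
echo $(( (tensors + max_per_chunk - 1) / max_per_chunk ))
# → 10
```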
## Merge model back

```bash
./gguf-split --merge ggml-model-f32-00001-of-00010.gguf ggml-model-f32.gguf
```
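Every GGUF file begins with the 4-byte ASCII magic `GGUF`, so a quick way to sanity-check the merged result is to read its first four bytes. The sketch below fabricates a stand-in file so it is self-contained; point `head` at the real `ggml-model-f32.gguf` instead:

```bash
# Stand-in file with a GGUF-style header; replace with the real merged model.
printf 'GGUF0000' > example.gguf
# Read the 4-byte magic; a valid GGUF file prints "GGUF".
head -c 4 example.gguf
# → GGUF
```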