---
library_name: peft
tags:
- code
- instruct
- code-llama
datasets:
- cognitivecomputations/dolphin-coder
base_model: codellama/CodeLlama-7b-hf
license: apache-2.0
---

### Finetuning Overview:

**Model Used:** codellama/CodeLlama-7b-hf

**Dataset:** cognitivecomputations/dolphin-coder

#### Dataset Insights:

The [Dolphin-Coder](https://huggingface.co/datasets/cognitivecomputations/dolphin-coder) dataset is a high-quality collection of 100,000+ coding questions and responses. It is well suited for supervised fine-tuning (SFT) and for teaching language models to improve on coding tasks.
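
The dataset can be pulled straight from the Hugging Face Hub with the `datasets` library. This is a minimal sketch for inspecting it; the card does not list the column names, so the example prints a record rather than assuming field names:

```python
from datasets import load_dataset

# Pull the Dolphin-Coder dataset from the Hugging Face Hub.
# The card reports a 100% train split, so only "train" is loaded here.
dataset = load_dataset("cognitivecomputations/dolphin-coder", split="train")

print(dataset)      # number of rows and column names
print(dataset[0])   # inspect one question/response record
```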

#### Finetuning Details:

Using [MonsterAPI](https://monsterapi.ai)'s [no-code LLM finetuner](https://monsterapi.ai/finetuning), this finetuning:

- Was highly cost-effective.
- Completed in a total duration of 15 hours 31 minutes for 1 epoch on an A6000 48GB GPU.
- Cost `$31.31` for the full epoch.

#### Hyperparameters & Additional Details:

- **Epochs:** 1
- **Total Finetuning Cost:** $31.31
- **Model Path:** codellama/CodeLlama-7b-hf
- **Learning Rate:** 0.0002
- **Data Split:** 100% train
- **Gradient Accumulation Steps:** 128
- **LoRA r:** 32
- **LoRA alpha:** 64
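
These values map onto `peft` and `transformers` configuration roughly as shown below. This is a sketch, not the exact MonsterAPI job: `target_modules`, LoRA dropout, bias, the per-device batch size, and the output directory are assumptions, since the card does not report them.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings taken from this card; target_modules, dropout, and bias
# are NOT listed in the card and are assumed here (common Llama-family choices).
lora_config = LoraConfig(
    r=32,                 # LoRA r (from this card)
    lora_alpha=64,        # LoRA alpha (from this card)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    lora_dropout=0.05,    # assumption
    bias="none",          # assumption
    task_type="CAUSAL_LM",
)

# Training settings from this card; the batch size is assumed, as it is not listed.
training_args = TrainingArguments(
    output_dir="codellama-7b-dolphin-coder",  # placeholder path
    num_train_epochs=1,                       # from this card
    learning_rate=2e-4,                       # from this card
    gradient_accumulation_steps=128,          # from this card
    per_device_train_batch_size=1,            # assumption
)
```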
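
#### Loading the Adapter:

This card ships no inference instructions, so the following is a minimal, untested sketch of loading the resulting LoRA adapter with `peft`. `ADAPTER_ID` is a placeholder for this repository's id or a local adapter path, and the plain-text prompt format is an assumption; the card does not specify one.

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# ADAPTER_ID is a placeholder: substitute this repository's id or a
# local path to the downloaded adapter weights.
ADAPTER_ID = "path/to/this-adapter"

# Load the LoRA adapter on top of the CodeLlama-7b base model.
model = AutoPeftModelForCausalLM.from_pretrained(
    ADAPTER_ID,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```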