Instructions for using diegoakel/llama3.2-1B-PythonInstruct with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use diegoakel/llama3.2-1B-PythonInstruct with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("diegoakel/llama3.2-1B-PythonInstruct", dtype="auto")
```
A fuller text-generation sketch follows after the list below.
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Unsloth Studio
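The Transformers snippet above loads the bare model backbone with AutoModel; for text generation you would typically load the causal-LM head and the tokenizer instead. A minimal sketch, assuming a recent transformers release (the prompt is illustrative, not from the model card):

```python
# Illustrative generation sketch (not from the model card): load the causal-LM
# head and tokenizer, then generate a completion from a plain prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "diegoakel/llama3.2-1B-PythonInstruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, dtype="auto")  # use torch_dtype="auto" on older transformers

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```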
How to use diegoakel/llama3.2-1B-PythonInstruct with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for diegoakel/llama3.2-1B-PythonInstruct to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for diegoakel/llama3.2-1B-PythonInstruct to start chatting
```
Using Hugging Face Spaces for Unsloth
```
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for diegoakel/llama3.2-1B-PythonInstruct to start chatting
```
Load model with FastModel
```bash
pip install unsloth
```
```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="diegoakel/llama3.2-1B-PythonInstruct",
    max_seq_length=2048,
)
```
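Once loaded, the returned pair can be used like a standard Transformers model and tokenizer. A minimal continuation of the snippet above, with an illustrative prompt that is not part of the model card:

```python
# Continues from the FastModel snippet above (model and tokenizer already loaded).
# The prompt below is illustrative only.
prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```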
Uploaded model
- Developed by: diegoakel
- License: apache-2.0
- Finetuned from model: unsloth/Llama-3.2-1B-bnb-4bit
The notebook used to train the model is available here. It is the Llama 3.2 1B base model (the Unsloth version) finetuned to write Python code using the iamtarun/python_code_instructions_18k_alpaca dataset.
I wrote about the process on my blog, here.
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
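Because the finetuning data is Alpaca-formatted, prompting the model with the same template is likely to give the best results. A sketch of that template, assuming the standard Alpaca layout used by the dataset (verify the exact wording against the dataset card):

```python
# Hypothetical helper (not from the model card): builds an Alpaca-style prompt
# matching the assumed format of the finetuning data.
def build_prompt(instruction: str, inp: str = "") -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{inp}\n\n"
        "### Output:\n"
    )

prompt = build_prompt("Write a Python function that computes the factorial of n.")
```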
Inference Providers
This model isn't deployed by any inference provider.
Model tree for diegoakel/llama3.2-1B-PythonInstruct
- Base model: meta-llama/Llama-3.2-1B
- Quantized: unsloth/Llama-3.2-1B-bnb-4bit