Tags: Text Generation · Transformers · PyTorch · Safetensors · gpt_bigcode · code · Eval Results (legacy) · text-generation-inference
Instructions to use bigcode/starcoder with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use bigcode/starcoder with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="bigcode/starcoder")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder")
model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use bigcode/starcoder with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "bigcode/starcoder"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "bigcode/starcoder",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker:

```shell
docker model run hf.co/bigcode/starcoder
```
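The same completion call can be made from Python with only the standard library. A minimal sketch, assuming a vLLM server is already running on `localhost:8000` as started above (the `build_completion_request` helper and its defaults are illustrative, mirroring the curl payload):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # default address for `vllm serve`


def build_completion_request(prompt, model="bigcode/starcoder",
                             max_tokens=512, temperature=0.5):
    """Build an OpenAI-compatible /v1/completions request payload."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


def complete(prompt, **kwargs):
    """POST the payload and return the first completion string.

    Requires a running server; this mirrors the curl call above.
    """
    payload = build_completion_request(prompt, **kwargs)
    req = urllib.request.Request(
        f"{BASE_URL}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]


# Building the payload needs no server; sending it does.
print(json.dumps(build_completion_request("Once upon a time,")))
```

Because the endpoint is OpenAI-compatible, any OpenAI-style client pointed at `http://localhost:8000/v1` should work the same way.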
- SGLang
How to use bigcode/starcoder with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "bigcode/starcoder" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "bigcode/starcoder",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker images:

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "bigcode/starcoder" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "bigcode/starcoder",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

- Docker Model Runner
How to use bigcode/starcoder with Docker Model Runner:
```shell
docker model run hf.co/bigcode/starcoder
```
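All of the serving options above accept plain left-to-right prompts. StarCoder was additionally trained for fill-in-the-middle (FIM), where the prompt encodes a prefix and a suffix with special tokens and the model generates the missing middle. A minimal sketch of building such a prompt, using the FIM token names from the StarCoder tokenizer (the example prefix/suffix are illustrative):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Compose a fill-in-the-middle prompt with StarCoder's FIM tokens.

    The model generates the code that belongs between `prefix` and
    `suffix`, after the <fim_middle> token.
    """
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"


prompt = build_fim_prompt(
    prefix="def fibonacci(n):\n    ",
    suffix="\n    return a\n",
)
print(prompt)
```

The resulting string can be sent as the `prompt` field of any of the completion endpoints above; generation is typically stopped at the end-of-text token.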
Community discussions:

- #128 · Install & run bigcode/starcoder easily using llmpm · opened 2 months ago by sarthak-saxena
- #127 · my changes · opened 6 months ago by mrabriq-786
- #126 · Can I have access? 🙏 · opened 11 months ago by bamkeelo
- #125 · Request: Access · 4 comments · opened 12 months ago by houssem2001115
- #124 · Using the source code to fine tune another model · 1 comment · opened about 1 year ago by ssk77
- #123 · Update README.md · opened about 1 year ago by lucas2016
- #122 · Update README.md · opened about 1 year ago by lucas2016
- #121 · Request: DOI · opened about 1 year ago by Achyuth69
- #120 · Request: DOI · opened over 1 year ago by Ad123321
- #119 · Request: DOI · opened over 1 year ago by Ad123321
- #118 · Request: DOI · opened over 1 year ago by kavinjo
- #116 · 🚩 Report: Legal issue(s) · opened over 1 year ago by arborelia
- #115 · Use starcoder on custom dataset for problem solving in multiple programming languages · opened almost 2 years ago by pravallika01
- #114 · Can this model be used for software fault localization? · opened about 2 years ago by xd592319702
- #113 · Fine-tuning without prompt-response data · opened about 2 years ago by rajlohith2
- #111 · Questions regarding Stack v2 and StarCoder v2 · opened about 2 years ago by aditya2211
- #109 · KeyError: 'starcoder2' Need help · 1 comment · opened about 2 years ago by CodeHima
- #108 · update readme · opened about 2 years ago by ZennyKenny
- #107 · Adding `safetensors` variant of this model · opened about 2 years ago by lugi0
- #106 · safetensors version · opened about 2 years ago by danielkorat
- #105 · Inconsistent Results Between HF Inference API and StarCoder Playground for Code Completion Task · opened about 2 years ago by FWeindel
- #103 · IntelliJ IDEA 2023.3.2 ultimate starcoder 422 · 1 comment · opened over 2 years ago by pauloberezini
- #102 · Prompt to generate SQL? · ➕ 1 · 1 comment · opened over 2 years ago by nheidloff
- #100 · PyCharm Issue: Error 422 with StarCoder Plugin Enabled · 9 comments · opened over 2 years ago by abhikr231
- #99 · Is the inference API available? · opened over 2 years ago by ratchainant
- #98 · Finetuning Starcoder with languages that are not present in The Stack · 1 comment · opened over 2 years ago by lazarantal
- #97 · How do I run the humaneval test set using starcoder? Has anyone tried it? · 1 comment · opened over 2 years ago by LiuWhite
- #96 · how to add Evaluation results in model card? · 1 comment · opened over 2 years ago by yangrong20230920
- #94 · Stopping criteria · 3 comments · opened over 2 years ago by AnudeepPeela
- #93 · How to parallelize starcoder inference? · 1 comment · opened over 2 years ago by Cubby9059
- #91 · text generation inference not working for starcoder model · opened over 2 years ago by avirajsingh
- #89 · recommended parameters for the ConstantLengthDataset · opened over 2 years ago by rachelshalom
- #87 · Remove input id tokens · opened over 2 years ago by satsat
- #85 · "fix bugpackage" when i use starcode generate code find a bug · opened over 2 years ago by leojames
- #83 · Can you run starcoder in a Mac book pro machine? · 3 comments · opened over 2 years ago by neo-benjamin
- #82 · FineTuning on SageMaker · 2 comments · opened over 2 years ago by dshah3
- #81 · ASP.net core support · opened almost 3 years ago by moshere
- #80 · Model access error during training on SageMaker · 1 comment · opened almost 3 years ago by chai007
- #79 · using my own code for code generation · 👍 3 · 5 comments · opened almost 3 years ago by muntahabintealam
- #78 · How to save and load the Peft/LoRA Finetune · 👍 4 · 3 comments · opened almost 3 years ago by LazerJesus
- #77 · MBPP evolution · 3 comments · opened almost 3 years ago by Edisoncccc
- #76 · Why isn’t the memory being released after inference? · 1 comment · opened almost 3 years ago by CodeWave
- #75 · Code conversion using Starcoder · ➕ 1 · opened almost 3 years ago by sionsmith
- #73 · Differences in Response Accuracy and Speed between FP32, 16, 8? · opened almost 3 years ago by elligottmc
- #72 · M1 Error while using device = "mps" · 👍 3 · 2 comments · opened almost 3 years ago by moshere
- #71 · Error in _sync_params_and_buffers dist._broadcast_coalesced( RuntimeError: Invalid scalar type · 3 comments · opened almost 3 years ago by mahi22muki
- #70 · Default parameters · opened almost 3 years ago by einsteiner1983
- #69 · [License] Are models based on StarCoder required to be open-sourced too? Like GPT? · opened almost 3 years ago by Bilibili
- #68 · valueerror: error initializing torch.distributed using env:// rendezvous: environment variable master_addr expected, but not set · 1 comment · opened almost 3 years ago by mahi22muki
- #67 · Tokenizer causes issues in Finetuning because of special tokens in tokenization <|X|> · opened almost 3 years ago by LazerJesus