Instructions to use defog/sqlcoder-70b-alpha with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use defog/sqlcoder-70b-alpha with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="defog/sqlcoder-70b-alpha")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("defog/sqlcoder-70b-alpha")
model = AutoModelForCausalLM.from_pretrained("defog/sqlcoder-70b-alpha")
```
- Notebooks
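Once the model is loaded, generating SQL is a single call. The helper below is an illustrative sketch, not part of the official snippet: the decoding settings (`max_new_tokens`, greedy decoding) are assumptions, and a 70B checkpoint needs several GPUs (or a quantized variant) to load at all.

```python
def generate_sql(prompt: str, max_new_tokens: int = 256) -> str:
    """Greedy-decode a completion for a sqlcoder-style prompt.

    Sketch only: loading the 70B checkpoint requires multiple GPUs
    (device_map="auto" shards it across them) or a quantized variant.
    """
    # Imported here so the (very large) download only happens on first call.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("defog/sqlcoder-70b-alpha")
    model = AutoModelForCausalLM.from_pretrained(
        "defog/sqlcoder-70b-alpha", device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # do_sample=False gives deterministic output, usually preferable for SQL.
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```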
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use defog/sqlcoder-70b-alpha with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "defog/sqlcoder-70b-alpha"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "defog/sqlcoder-70b-alpha",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/defog/sqlcoder-70b-alpha
```
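The same OpenAI-compatible endpoint can be called from Python instead of curl. A minimal sketch using only the standard library, with the endpoint and model name taken from the curl example above; `temperature=0.0` is an assumption on my part (deterministic output is usually what you want for SQL generation):

```python
import json
from urllib import request

def completion_payload(prompt: str, model: str = "defog/sqlcoder-70b-alpha",
                       max_tokens: int = 512, temperature: float = 0.0) -> dict:
    # Request body for the OpenAI-compatible /v1/completions endpoint.
    return {"model": model, "prompt": prompt,
            "max_tokens": max_tokens, "temperature": temperature}

def complete(prompt: str, base_url: str = "http://localhost:8000") -> str:
    # POST the JSON payload and return the first completion's text.
    req = request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(completion_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]

if __name__ == "__main__":
    # Requires the vLLM server from the previous step to be running.
    print(complete("Once upon a time,"))
```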
- SGLang
How to use defog/sqlcoder-70b-alpha with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "defog/sqlcoder-70b-alpha" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "defog/sqlcoder-70b-alpha",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "defog/sqlcoder-70b-alpha" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "defog/sqlcoder-70b-alpha",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use defog/sqlcoder-70b-alpha with Docker Model Runner:
```shell
docker model run hf.co/defog/sqlcoder-70b-alpha
```
sqlcoder-70b automatically translates Chinese field names into English in its answer
prompt (the question 黄铁伦的工作地点在哪 asks "Where is 黄铁伦's work location?"; the schema is an HR attendance sheet whose column names are Chinese, e.g. 姓名 = name, 工作地 = work location):
```
### Task
Generate a SQL query to answer [QUESTION]黄铁伦的工作地点在哪[/QUESTION]

### Instructions
If you cannot answer the question with the available database schema, return 'I do not know'

### Database Schema
The query will run on a database with the following schema:
CREATE TABLE IF NOT EXISTS Sheet1 (
    序号 TEXT,
    审核领导 TEXT,
    OA编号 TEXT,
    年月 TEXT,
    工号 TEXT,
    姓名 TEXT,
    部门 TEXT,
    项目 TEXT,
    岗位 TEXT,
    岗位类别 TEXT,
    招聘地 TEXT,
    工作地 TEXT,
    到职日期 TEXT,
    学历 TEXT,
    迟到次数 TEXT,
    旷工天数 TEXT,
    事假天数 TEXT,
    病假天数 TEXT,
    产假天数 TEXT,
    丧假天数 TEXT,
    折假天数 TEXT,
    休年假天数 TEXT,
    婚假天数 TEXT,
    节假日加班天数 TEXT,
    缺勤天数 TEXT,
    考勤备注 TEXT,
    部门编号 TEXT
);

### Answer
Given the database schema, here is the SQL query that answers [QUESTION]黄铁伦的工作地点在哪[/QUESTION]
[SQL]
```
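The prompt above can be assembled programmatically so that the question and schema can be swapped out. This is a sketch, not an official helper: the `###` section headers follow sqlcoder's documented prompt format, the schema here is trimmed to two columns for brevity, and the commented expected answer (using the untranslated Chinese identifiers) is what a correct response would resemble, not actual model output.

```python
# Template mirroring the sqlcoder prompt format shown above.
SQLCODER_PROMPT = """### Task
Generate a SQL query to answer [QUESTION]{question}[/QUESTION]

### Instructions
If you cannot answer the question with the available database schema, return 'I do not know'

### Database Schema
The query will run on a database with the following schema:
{schema}

### Answer
Given the database schema, here is the SQL query that answers [QUESTION]{question}[/QUESTION]
[SQL]"""

def build_prompt(question: str, schema: str) -> str:
    """Fill the sqlcoder template with a question and a CREATE TABLE schema."""
    return SQLCODER_PROMPT.format(question=question, schema=schema)

# Trimmed example schema (the full Sheet1 table appears above).
schema = "CREATE TABLE IF NOT EXISTS Sheet1 (姓名 TEXT, 工作地 TEXT);"
prompt = build_prompt("黄铁伦的工作地点在哪", schema)

# A correct answer, keeping the Chinese identifiers, would resemble:
# SELECT 工作地 FROM Sheet1 WHERE 姓名 = '黄铁伦';
```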
If the schema is longer than the model's maximum context length, what is the solution?
