Instructions to use BugTraceAI/BugTraceAI-CORE-Pro with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use BugTraceAI/BugTraceAI-CORE-Pro with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="BugTraceAI/BugTraceAI-CORE-Pro",
    filename="bugtraceai-core-pro.gguf",
)

# Example chat call (the card defines no canonical input example; replace
# the message below with your own authorized-testing prompt):
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize why this finding is likely valid and what evidence is missing."}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use BugTraceAI/BugTraceAI-CORE-Pro with llama.cpp:
Install from brew
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf BugTraceAI/BugTraceAI-CORE-Pro

# Run inference directly in the terminal:
llama-cli -hf BugTraceAI/BugTraceAI-CORE-Pro
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf BugTraceAI/BugTraceAI-CORE-Pro

# Run inference directly in the terminal:
llama-cli -hf BugTraceAI/BugTraceAI-CORE-Pro
```
Use pre-built binary
```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf BugTraceAI/BugTraceAI-CORE-Pro

# Run inference directly in the terminal:
./llama-cli -hf BugTraceAI/BugTraceAI-CORE-Pro
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf BugTraceAI/BugTraceAI-CORE-Pro

# Run inference directly in the terminal:
./build/bin/llama-cli -hf BugTraceAI/BugTraceAI-CORE-Pro
```
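Whichever install path you choose, `llama-server` exposes an OpenAI-compatible API (port 8080 by default). A minimal Python sketch for talking to it with only the standard library — the model name, port, and example prompt are assumptions based on the defaults above, not part of the official instructions:

```python
import json
import urllib.request


def build_chat_payload(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.1,
    }


payload = build_chat_payload(
    "BugTraceAI/BugTraceAI-CORE-Pro",
    "Triage this finding: the q parameter of /search is reflected unescaped.",
)


def send_chat(payload: dict, base_url: str = "http://localhost:8080/v1") -> str:
    """POST the payload to a running llama-server and return the reply text."""
    req = urllib.request.Request(
        base_url + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Call `send_chat(payload)` once the server from the commands above is running; any OpenAI-compatible client library would work the same way against this endpoint.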
Use Docker
docker model run hf.co/BugTraceAI/BugTraceAI-CORE-Pro
- LM Studio
- Jan
- Ollama
How to use BugTraceAI/BugTraceAI-CORE-Pro with Ollama:
ollama run hf.co/BugTraceAI/BugTraceAI-CORE-Pro
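Besides the interactive `ollama run` command, a local Ollama daemon also serves a REST API (port 11434 by default). A hedged sketch of building a request for its `/api/chat` endpoint — the example prompt is illustrative, and the endpoint and port follow Ollama's documented defaults:

```python
import json
import urllib.request


def build_ollama_chat(model: str, prompt: str) -> dict:
    """Build a request body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for a single JSON response instead of a stream
    }


body = build_ollama_chat(
    "hf.co/BugTraceAI/BugTraceAI-CORE-Pro",
    "Rewrite this scanner output into a concise engineering ticket: "
    "'Reflected XSS in /search parameter q'.",
)


def send(body: dict, url: str = "http://localhost:11434/api/chat") -> str:
    """POST to a running Ollama server; requires `ollama serve` locally."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```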
- Unsloth Studio
How to use BugTraceAI/BugTraceAI-CORE-Pro with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for BugTraceAI/BugTraceAI-CORE-Pro to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for BugTraceAI/BugTraceAI-CORE-Pro to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for BugTraceAI/BugTraceAI-CORE-Pro to start chatting
```
- Pi
How to use BugTraceAI/BugTraceAI-CORE-Pro with Pi:
Start the llama.cpp server
```shell
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf BugTraceAI/BugTraceAI-CORE-Pro
```
Configure the model in Pi
```shell
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add the model to ~/.pi/agent/models.json:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "BugTraceAI/BugTraceAI-CORE-Pro" }
      ]
    }
  }
}
```

Run Pi

```shell
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use BugTraceAI/BugTraceAI-CORE-Pro with Hermes Agent:
Start the llama.cpp server
```shell
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf BugTraceAI/BugTraceAI-CORE-Pro
```
Configure Hermes
```shell
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default BugTraceAI/BugTraceAI-CORE-Pro
```
Run Hermes
hermes
- Docker Model Runner
How to use BugTraceAI/BugTraceAI-CORE-Pro with Docker Model Runner:
docker model run hf.co/BugTraceAI/BugTraceAI-CORE-Pro
- Lemonade
How to use BugTraceAI/BugTraceAI-CORE-Pro with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull BugTraceAI/BugTraceAI-CORE-Pro
```
Run and chat with the model
```shell
lemonade run user.BugTraceAI-CORE-Pro-{{QUANT_TAG}}
```

List all available models

```shell
lemonade list
```
BugTraceAI-CORE-Pro (12B)
A higher-capacity security engineering model from BugTraceAI, tuned for deeper analysis, professional reporting, exploit-chain review, and long-context investigation across agentic web pentesting workflows.
Model Overview
| Field | Value |
|---|---|
| Organization | BugTraceAI |
| Framework | BugTraceAI agentic web pentesting framework |
| Variant | BugTraceAI-CORE-Pro |
| Parameter Scale | 12B |
| Architecture | Mistral Nemo |
| Intended Domain | Application security and authorized security research |
| Primary Delivery Format | GGUF |
Intended Use
- End-to-end analysis of web application findings in authorized environments.
- Drafting professional vulnerability reports and remediation guidance.
- Reasoning over larger technical contexts such as logs, source code, and findings bundles.
Out-of-Scope Use
- Autonomous offensive operation against unauthorized targets.
- Replacing human validation for severity, exploitability, or business impact.
- Guaranteeing exploit reliability across target-specific environments.
Training Data Summary
This model was tuned for security engineering workflows using a curated mix of public, security-focused material. The training mix is described at a high level below:
- Public vulnerability writeups and disclosed security reports used to improve structure, reasoning, and reporting quality.
- Security methodology material used to improve triage, reproduction planning, and remediation-oriented analysis.
- Domain examples covering common web application security patterns, defensive controls, and scanner-style findings.
The card intentionally describes the data at a summary level. It should not be read as a guarantee of exact coverage for any individual product, CVE, target stack, or technique.
Prompting Guidance
Recommended prompting style:
- State the environment and authorization context clearly.
- Provide concrete evidence: request, response, stack details, logs, code snippets, or scan output.
- Ask for one task at a time: triage, reproduction planning, impact analysis, remediation, or reporting.
Example tasks that fit this model:
- Summarize why this finding is likely valid and what evidence is missing.
- Rewrite this scanner output into a concise engineering ticket.
- Draft remediation steps for this authorization bug or input validation issue.
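The three guidelines above can be made concrete with a small prompt builder. This is a sketch, not an official template: the helper name, section labels, and the sample authorization/evidence strings are all illustrative placeholders.

```python
def build_triage_prompt(authorization: str, evidence: str, task: str) -> str:
    """Assemble a prompt following the guidance above: state the
    authorization context, provide concrete evidence, ask one task."""
    return (
        f"Authorization context: {authorization}\n\n"
        f"Evidence:\n{evidence}\n\n"
        f"Task: {task}"
    )


prompt = build_triage_prompt(
    authorization="Authorized assessment of a staging environment under a signed engagement.",
    evidence="GET /search?q=<script>1</script> returns the payload unescaped in the response body.",
    task="Summarize why this finding is likely valid and what evidence is missing.",
)
```

Keeping authorization, evidence, and the single task in separate labeled sections makes it easier for the model to separate confirmed facts from the question being asked.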
Ollama Example
```
FROM hf.co/BugTraceAI/BugTraceAI-CORE-Pro

SYSTEM """
You are BugTraceAI-CORE-Pro, a security engineering assistant for authorized testing,
triage, and remediation support. Prefer precise technical analysis, state assumptions,
and separate confirmed evidence from hypotheses.
"""

PARAMETER temperature 0.1
PARAMETER top_p 0.9
```
Create the local model with:
ollama create bugtrace-pro -f Modelfile
Strengths
- Better long-context reasoning and report quality than the Fast variant.
- More suitable for multi-step analysis and vulnerability writeups.
- Stronger at connecting findings, evidence, and remediation paths.
Limitations
- Higher latency and resource requirements than the Fast model.
- Still requires human review for high-risk decisions and disclosure quality.
- Performance depends on prompt quality and the evidence provided.
Evaluation Status
This release is currently documented with qualitative positioning rather than a public benchmark suite. If you rely on the model for production workflows, validate it against your own prompt set, evidence format, and report quality bar.
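One lightweight way to run that validation is a phrase-based regression check over your own prompt set. A minimal sketch — the harness shape, the stub generator, and the required phrases are all assumptions for illustration, and the stub stands in for a real call to the model:

```python
from typing import Callable, Iterable


def check_prompt_set(
    generate: Callable[[str], str],
    cases: Iterable[tuple[str, list[str]]],
) -> list[str]:
    """Return the prompts whose output is missing any required phrase."""
    failures = []
    for prompt, required in cases:
        output = generate(prompt)
        if not all(phrase.lower() in output.lower() for phrase in required):
            failures.append(prompt)
    return failures


def stub_generate(prompt: str) -> str:
    # Placeholder for a real model call (e.g. via the local llama-server).
    return "Severity: high. Remediation: encode output on render."


failures = check_prompt_set(
    stub_generate,
    [("Triage this reflected XSS finding.", ["severity", "remediation"])],
)
# failures is empty when every required phrase appears in the output
```

Phrase checks only catch gross regressions; for report-quality workflows you would still review sampled outputs by hand against your own quality bar.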
Safety and Responsible Use
This model is intended for authorized security work, defensive research, education, and engineering support. Users are responsible for ensuring legal authorization, validating outputs, and applying human review before acting on model-generated analysis.
License
Apache-2.0.
Model tree for BugTraceAI/BugTraceAI-CORE-Pro
Base model
unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit