Instructions to use BugTraceAI/BugTraceAI-CORE-Fast with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use BugTraceAI/BugTraceAI-CORE-Fast with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="BugTraceAI/BugTraceAI-CORE-Fast",
    filename="bugtraceai-core-fast.gguf",
)

llm.create_chat_completion(
    messages=[
        # Example message; replace with your own task.
        {"role": "user", "content": "Triage this finding from an authorized test."},
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use BugTraceAI/BugTraceAI-CORE-Fast with llama.cpp:
Install from brew
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf BugTraceAI/BugTraceAI-CORE-Fast

# Run inference directly in the terminal:
llama-cli -hf BugTraceAI/BugTraceAI-CORE-Fast
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf BugTraceAI/BugTraceAI-CORE-Fast

# Run inference directly in the terminal:
llama-cli -hf BugTraceAI/BugTraceAI-CORE-Fast
```
Use pre-built binary
```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf BugTraceAI/BugTraceAI-CORE-Fast

# Run inference directly in the terminal:
./llama-cli -hf BugTraceAI/BugTraceAI-CORE-Fast
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf BugTraceAI/BugTraceAI-CORE-Fast

# Run inference directly in the terminal:
./build/bin/llama-cli -hf BugTraceAI/BugTraceAI-CORE-Fast
```
Use Docker
```shell
docker model run hf.co/BugTraceAI/BugTraceAI-CORE-Fast
```
- LM Studio
- Jan
- Ollama
How to use BugTraceAI/BugTraceAI-CORE-Fast with Ollama:
```shell
ollama run hf.co/BugTraceAI/BugTraceAI-CORE-Fast
```
- Unsloth Studio
How to use BugTraceAI/BugTraceAI-CORE-Fast with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for BugTraceAI/BugTraceAI-CORE-Fast to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for BugTraceAI/BugTraceAI-CORE-Fast to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for BugTraceAI/BugTraceAI-CORE-Fast to start chatting
```
- Pi
How to use BugTraceAI/BugTraceAI-CORE-Fast with Pi:
Start the llama.cpp server
```shell
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf BugTraceAI/BugTraceAI-CORE-Fast
```
Configure the model in Pi
```shell
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add to ~/.pi/agent/models.json:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "BugTraceAI/BugTraceAI-CORE-Fast" }
      ]
    }
  }
}
```

Run Pi

```shell
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use BugTraceAI/BugTraceAI-CORE-Fast with Hermes Agent:
Start the llama.cpp server
```shell
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf BugTraceAI/BugTraceAI-CORE-Fast
```
Configure Hermes
```shell
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default BugTraceAI/BugTraceAI-CORE-Fast
```
Run Hermes
```shell
hermes
```
- Docker Model Runner
How to use BugTraceAI/BugTraceAI-CORE-Fast with Docker Model Runner:
```shell
docker model run hf.co/BugTraceAI/BugTraceAI-CORE-Fast
```
- Lemonade
How to use BugTraceAI/BugTraceAI-CORE-Fast with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull BugTraceAI/BugTraceAI-CORE-Fast
```
Run and chat with the model
```shell
lemonade run user.BugTraceAI-CORE-Fast-{{QUANT_TAG}}
```

List all available models

```shell
lemonade list
```
BugTraceAI-CORE-Fast (7B)
A lightweight security engineering model from BugTraceAI, tuned for fast triage, payload review, scanning support, and concise remediation guidance across agentic web pentesting workflows.
Model Overview
| Field | Value |
|---|---|
| Organization | BugTraceAI |
| Framework | BugTraceAI agentic web pentesting framework |
| Variant | BugTraceAI-CORE-Fast |
| Parameter Scale | 7B |
| Architecture | Qwen2.5 Coder |
| Intended Domain | Application security and authorized security research |
| Primary Delivery Format | GGUF |
Intended Use
- Fast classification of web findings and scanner output.
- Short-form assistance for payload debugging in authorized test environments.
- Generating concise reproduction steps, notes, and developer-facing fixes.
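The fast-triage use case above can be sketched as a single OpenAI-compatible request to a local llama-server instance. This is a minimal, illustrative example: `build_triage_request` and the system prompt are hypothetical helpers, not part of the model or its API, and the endpoint URL assumes the default llama-server port.

```python
import json

# Hypothetical system prompt; tune it to your own triage workflow.
SYSTEM_PROMPT = (
    "You are a security triage assistant for authorized testing. "
    "Classify the finding and state what evidence is missing."
)

def build_triage_request(finding: str,
                         model: str = "BugTraceAI/BugTraceAI-CORE-Fast") -> dict:
    """Build an OpenAI-compatible chat payload for a local llama-server."""
    return {
        "model": model,
        "temperature": 0.1,  # low temperature for consistent classification
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Triage this finding:\n{finding}"},
        ],
    }

payload = build_triage_request(
    "Reflected XSS reported on /search?q= (staging environment, authorized test)."
)
# POST this to http://localhost:8080/v1/chat/completions once llama-server is running.
print(json.dumps(payload, indent=2))
```

Keeping each request to one finding and one task plays to the model's low-latency, short-prompt design.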
Out-of-Scope Use
- Unsupervised offensive use against systems without authorization.
- Claims of exploit success without external validation.
- Long-form reporting when deep context or multi-step reasoning is required.
Training Data Summary
This model was tuned for security engineering workflows using a curated mix of public, security-focused material. The training mix is described at a high level below:
- Public vulnerability writeups and disclosed security reports used to improve structure, reasoning, and reporting quality.
- Security methodology material used to improve triage, reproduction planning, and remediation-oriented analysis.
- Domain examples covering common web application security patterns, defensive controls, and scanner-style findings.
The card intentionally describes the data at a summary level. It should not be read as a guarantee of exact coverage for any individual product, CVE, target stack, or technique.
Prompting Guidance
Recommended prompting style:
- State the environment and authorization context clearly.
- Provide concrete evidence: request, response, stack details, logs, code snippets, or scan output.
- Ask for one task at a time: triage, reproduction planning, impact analysis, remediation, or reporting.
Example tasks that fit this model:
- Summarize why this finding is likely valid and what evidence is missing.
- Rewrite this scanner output into a concise engineering ticket.
- Draft remediation steps for this authorization bug or input validation issue.
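The recommended style above can be sketched as a small prompt builder that turns one scanner finding into a single-task request: authorization context first, then evidence, then exactly one task. The `ticket_prompt` helper and the finding fields are illustrative assumptions, not a required schema.

```python
# Hypothetical helper: format one scanner finding as a tightly scoped prompt,
# stating authorization, providing evidence, and asking for a single task.
def ticket_prompt(finding: dict) -> str:
    return "\n".join([
        "Context: authorized test of the staging environment.",
        f"Finding: {finding['title']} ({finding['severity']})",
        f"Evidence: {finding['evidence']}",
        "Task: rewrite this into a concise engineering ticket with remediation steps.",
    ])

prompt = ticket_prompt({
    "title": "Missing CSRF token on /account/email",
    "severity": "medium",
    "evidence": "POST accepted without csrf_token; session cookie lacks SameSite.",
})
print(prompt)
```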
Ollama Example
```
FROM hf.co/BugTraceAI/BugTraceAI-CORE-Fast
SYSTEM """
You are BugTraceAI-CORE-Fast, a security engineering assistant for authorized testing,
triage, and remediation support. Prefer precise technical analysis, state assumptions,
and separate confirmed evidence from hypotheses.
"""
PARAMETER temperature 0.1
PARAMETER top_p 0.9
```
Create the local model with:
```shell
ollama create bugtrace-fast -f Modelfile
```
Strengths
- Low-latency responses for automation-heavy workflows.
- Strong fit for short prompts, CLI integration, and rapid iteration.
- Useful as a first-pass model before escalating to the Pro variant.
Limitations
- More likely to miss cross-step reasoning than the Pro model.
- May require external tools to validate security claims.
- Produces best results with tightly scoped prompts and explicit context.
Evaluation Status
This release is currently documented with qualitative positioning rather than a public benchmark suite. If you rely on the model for production workflows, validate it against your own prompt set, evidence format, and report quality bar.
Safety and Responsible Use
This model is intended for authorized security work, defensive research, education, and engineering support. Users are responsible for ensuring legal authorization, validating outputs, and applying human review before acting on model-generated analysis.
License
Apache-2.0.