How to use CodeHima/Llama_TOS with Adapters:

from adapters import AutoAdapterModel

# Load the base model, then attach and activate the Llama_TOS adapter.
model = AutoAdapterModel.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
model.load_adapter("CodeHima/Llama_TOS", set_active=True)

This model is a fine-tuned version of the Llama 3.2 1B model, trained specifically to analyze privacy policies and terms of service. It can determine whether clauses are fair or unfair and identify the specific privacy practices mentioned in the text.
This model is designed for:
- classifying privacy policy and terms-of-service clauses as fair or unfair
- identifying the specific privacy practices a clause describes
The model was fine-tuned on the CodeHima/app350_llama_format dataset, which contains annotated conversations about privacy policy clauses.
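A minimal sketch of loading and inspecting that dataset with the Hugging Face datasets library (the "train" split name is an assumption and may differ):

from datasets import load_dataset

# Download the fine-tuning dataset from the Hugging Face Hub.
dataset = load_dataset("CodeHima/app350_llama_format")

# Show the available splits and one annotated conversation.
# The "train" split name is assumed here; print(dataset) lists the actual splits.
print(dataset)
print(dataset["train"][0])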
You can use this model to analyze privacy policy clauses or terms of service. Here's an example of how to use it:
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned tokenizer and model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("CodeHima/Llama_TOS")
model = AutoModelForCausalLM.from_pretrained("CodeHima/Llama_TOS")

# Ask the model to analyze a single privacy policy clause.
prompt = "Analyze this privacy policy clause: 'We collect your email address for marketing purposes.'"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate the analysis and decode it back to text.
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
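Because the base model is an instruction-tuned chat model, the checkpoint may respond better to chat-formatted prompts. Here is a minimal sketch that reuses the tokenizer and model loaded above and assumes the repository inherits the Llama 3.2 chat template:

# Format the same request as a chat turn (assumes a chat template is available).
messages = [
    {"role": "user", "content": "Analyze this privacy policy clause: 'We collect your email address for marketing purposes.'"}
]
chat_inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
chat_outputs = model.generate(chat_inputs, max_new_tokens=100)
print(tokenizer.decode(chat_outputs[0], skip_special_tokens=True))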
Base model: meta-llama/Llama-3.2-1B-Instruct