How to use GroNLP/bert_dutch_base_abusive_language with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="GroNLP/bert_dutch_base_abusive_language")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert_dutch_base_abusive_language")
model = AutoModelForSequenceClassification.from_pretrained("GroNLP/bert_dutch_base_abusive_language")
```
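When loading the model directly (rather than through the pipeline), the sequence-classification head returns raw logits, which are typically converted to class probabilities with a softmax and then to a prediction with an argmax. A minimal sketch of that post-processing, using illustrative logit values — the actual label names and ordering come from `model.config.id2label`:

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [1.2, -0.8]            # hypothetical logits for one tweet
probs = softmax(logits)         # class probabilities summing to 1
pred = probs.index(max(probs))  # index of the predicted class
```

In practice you would take `logits` from `model(**tokenizer(text, return_tensors="pt")).logits` and look the predicted index up in `model.config.id2label`.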
Fine-tuned model for detecting instances of abusive language in Dutch tweets. The model has been trained on DALC v2.0.
Abusive language is defined as "Impolite, harsh, or hurtful language (that may contain profanities or vulgar language) that results in a debasement, harassment, threat, or aggression of an individual or a (social) group, but not necessarily of an entity, an institution, an organisation, or a concept." (Ruitenbeek et al., 2022)
The model achieves the following results on multiple test sets:
- DALC held-out test set: macro F1: 72.23; F1 Abusive: 51.60
- HateCheck-NL (functional benchmark for hate speech): Accuracy: 60.19; Accuracy non-hateful tests: 57.38; Accuracy hateful tests: 59.58
- OP-NL (dynamic benchmark for offensive language): macro F1: 57.57
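Macro F1, reported for the DALC and OP-NL results above, is the unweighted mean of the per-class F1 scores, so the minority Abusive class weighs as much as the majority class. A minimal sketch with hypothetical per-class counts (not taken from the actual evaluation):

```python
def f1(tp, fp, fn):
    # Per-class F1 from true positives, false positives, and false negatives.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion counts for the two classes.
f1_not_abusive = f1(tp=800, fp=100, fn=70)
f1_abusive = f1(tp=60, fp=70, fn=100)

# Macro F1: unweighted average over classes, regardless of class size.
macro_f1 = (f1_not_abusive + f1_abusive) / 2
```

This is why the macro F1 (72.23 on the DALC test set) sits well above the F1 for the Abusive class alone (51.60): the easier majority class pulls the average up.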
More details on the training settings and pre-processing are available here.