This is a slop-reduced version of mistralai/Mistral-Nemo-Instruct-2407, made using a development version of Heretic (Git commit 1cfd09d7f3a4d50793d5c3948a6c74aac108f182).

This version additionally has MPOA applied using Grim Jim's tool, manually calibrated to minimize the refusal rate; KL divergence was not measured. The result is an uncensored, slop-reduced version of Mistral Nemo.
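Neither Heretic's nor MPOA's exact procedure is reproduced in this card, but tools in this family typically work by projecting selected weight matrices orthogonal to a learned "refusal direction" in the model's residual stream. The following is a minimal sketch of that projection step only, with placeholder shapes and a random direction; it is not the code either tool actually runs.

import torch

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    # Remove the component along `direction` from everything this layer
    # writes into the residual stream: W' = (I - d d^T) W.
    d = direction / direction.norm()
    return weight - torch.outer(d, d @ weight)

# Placeholder usage: 5120 is Mistral Nemo's hidden size; the refusal
# direction here is random and purely illustrative.
hidden = 5120
W = torch.randn(hidden, hidden)
refusal_direction = torch.randn(hidden)
W_ablated = orthogonalize(W, refusal_direction)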

LoRA

This is a LoRA adapter extracted from a language model using mergekit.

LoRA Details

This LoRA adapter was extracted from EldritchLabs/Mistral-Nemo-Instruct-2407-heretic-noslop-MPOA and uses mistralai/Mistral-Nemo-Instruct-2407 as a base.
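mergekit-extract-lora recovers an adapter by low-rank-approximating the difference between the finetuned and base weights. Below is a minimal sketch of that core idea (a truncated SVD of the weight delta, written in PyTorch); the variable names are illustrative and are not mergekit internals.

import torch

def extract_lora(w_tuned: torch.Tensor, w_base: torch.Tensor, max_rank: int = 64):
    # Factor the weight delta as B @ A with rank <= max_rank via truncated SVD.
    delta = (w_tuned - w_base).float()
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    r = min(max_rank, S.numel())
    B = U[:, :r] * S[:r].sqrt()             # lora_B: (out_features, r)
    A = S[:r].sqrt().unsqueeze(1) * Vh[:r]  # lora_A: (r, in_features)
    return A, B

At load time the adapter is applied as w_base plus a scaled B @ A, so --max-rank=64 bounds how much of each layer's delta the adapter can represent.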

Parameters

The following command was used to extract this LoRA adapter:

mergekit-extract-lora --model B:\12B\!models--p-e-w--Mistral-Nemo-Instruct-2407-heretic-noslop\MPOA_pew --base-model B:\12B\!models--mistralai--Mistral-Nemo-Instruct-2407 --out-path B:\12B\!models--p-e-w--Mistral-Nemo-Instruct-2407-heretic-noslop\LoRA --max-rank=64 --cuda
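To use the extracted adapter, attach it to the base model with peft. The adapter repo ID below is assumed from this card's title; substitute the actual repo name if the LoRA is published elsewhere.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Nemo-Instruct-2407",
    torch_dtype=torch.bfloat16,
)
# Adapter repo ID assumed from this card; adjust if the LoRA lives
# under a different name.
model = PeftModel.from_pretrained(
    base, "EldritchLabs/Mistral-Nemo-Instruct-2407-heretic-noslop-MPOA"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-Nemo-Instruct-2407")

Calling model.merge_and_unload() afterwards folds the adapter back into the base weights if you want a standalone model.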