DeepFilterNet: Perceptually Motivated Real-Time Speech Enhancement
Paper: arXiv [2305.08227](https://arxiv.org/abs/2305.08227)
MLX-compatible weights for DeepFilterNet3, a real-time speech enhancement model that suppresses background noise from audio.

This is a direct conversion of the original PyTorch weights to safetensors format for use with MLX on Apple Silicon, performed with the included `convert_deepfilternet.py` script. No fine-tuning or quantisation was applied; the weights are numerically identical to the original checkpoint.
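Since the claim is bitwise identity rather than mere numerical closeness, a check along these lines can confirm it. This is a minimal sketch using dummy arrays in place of the real checkpoints; `weights_identical` is a hypothetical helper, not part of this repo:

```python
import numpy as np

def weights_identical(original: dict, converted: dict) -> bool:
    """Return True if both dicts hold the same keys and bitwise-equal
    float32 tensors, i.e. no quantisation or dtype change occurred."""
    if original.keys() != converted.keys():
        return False
    return all(
        converted[k].dtype == np.float32
        and np.array_equal(original[k], converted[k])
        for k in original
    )

# Dummy stand-ins for the real PyTorch and safetensors weight dicts:
orig = {"enc.conv.weight": np.random.rand(32, 1, 3, 3).astype(np.float32)}
conv = {k: v.copy() for k, v in orig.items()}
print(weights_identical(orig, conv))  # True
```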
| File | Description |
|---|---|
| `config.json` | Model architecture configuration |
| `model.safetensors` | Pre-converted weights (8.3 MB, float32) |
| `convert_deepfilternet.py` | Conversion script (PyTorch → MLX safetensors) |
| Parameter | Value |
|---|---|
| Sample rate | 48 kHz |
| FFT size | 960 |
| Hop size | 480 |
| ERB bands | 32 |
| DF bins | 96 |
| DF order | 5 |
| Parameters | ~2M |
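The STFT settings above translate directly into timing and frequency figures; the arithmetic below is a sketch derived from the table, not from the model code:

```python
sample_rate = 48_000  # Hz
fft_size = 960        # samples per analysis window
hop_size = 480        # samples between successive frames
df_bins = 96          # lowest STFT bins treated by the deep filter

window_ms = 1000 * fft_size / sample_rate  # analysis window length
hop_ms = 1000 * hop_size / sample_rate     # one new frame every hop
bin_hz = sample_rate / fft_size            # STFT frequency resolution
df_cutoff_hz = df_bins * bin_hz            # deep filtering covers up to here

print(window_ms, hop_ms, bin_hz, df_cutoff_hz)  # 20.0 10.0 50.0 4800.0
```

So the model consumes 20 ms windows with 50% overlap, producing one enhanced frame every 10 ms, and its deep-filtering stage acts on the band below roughly 4.8 kHz.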
```swift
import MLXAudioSTS

let model = try await DeepFilterNetModel.fromPretrained("iky1e/DeepFilterNet3-MLX")
let enhanced = try model.enhance(audioArray)
```
```python
from mlx_audio.sts.models.deepfilternet import DeepFilterNetModel

model = DeepFilterNetModel.from_pretrained(version=3, model_dir="path/to/local/dir")
enhanced = model.enhance("noisy.wav")
```
To re-create this conversion from the original DeepFilterNet checkpoint:
```bash
python convert_deepfilternet.py \
    --input /path/to/DeepFilterNet3 \
    --output ./DeepFilterNet3-MLX \
    --name DeepFilterNet3
```
The input directory should contain a `config.ini` file and a `checkpoints/` folder from the original repo.
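A quick pre-flight check of that layout could look like the following. `has_expected_layout` is a hypothetical helper for illustration, not part of the conversion script:

```python
from pathlib import Path
import tempfile

def has_expected_layout(model_dir: str) -> bool:
    """Check that the directory matches what the conversion expects:
    a config.ini file plus a checkpoints/ sub-folder."""
    root = Path(model_dir)
    return (root / "config.ini").is_file() and (root / "checkpoints").is_dir()

# Example against a throwaway directory:
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "config.ini").write_text("[df]\n")
    (Path(d) / "checkpoints").mkdir()
    print(has_expected_layout(d))  # True
```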
```bibtex
@inproceedings{schroeter2023deepfilternet3,
  title={DeepFilterNet: Perceptually Motivated Real-Time Speech Enhancement},
  author={Schr{\"o}ter, Hendrik and Rosenkranz, Tobias and Escalante-B., Alberto N. and Maier, Andreas},
  booktitle={INTERSPEECH},
  year={2023}
}
```