tiny-audio-embedded

This model is a fine-tuned version of an unspecified base model on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2044

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 2000
  • num_epochs: 1
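
For reference, these settings map onto a transformers TrainingArguments configuration roughly as sketched below. This is not the authors' training script: output_dir is a placeholder, and the AdamW betas and epsilon are left at the library defaults, which match the values reported above.

```python
# Sketch of a TrainingArguments setup matching the hyperparameters listed above.
# output_dir is a placeholder; AdamW betas/epsilon are the library defaults,
# which are the values reported on this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tiny-audio-embedded",   # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,      # 32 x 2 = effective train batch size of 64
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="cosine",
    warmup_steps=2000,
    num_train_epochs=1,
)
```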

Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.9934        | 0.0153 | 1000  | 0.3840          |
| 0.9974        | 0.0306 | 2000  | 0.4156          |
| 1.0350        | 0.0459 | 3000  | 0.3944          |
| 0.9922        | 0.0612 | 4000  | 0.3625          |
| 1.0129        | 0.0765 | 5000  | 0.3386          |
| 0.8650        | 0.0918 | 6000  | 0.3348          |
| 0.9696        | 0.1071 | 7000  | 0.3241          |
| 0.9879        | 0.1224 | 8000  | 0.3174          |
| 0.9225        | 0.1377 | 9000  | 0.3154          |
| 0.8560        | 0.1530 | 10000 | 0.3139          |
| 0.8554        | 0.1683 | 11000 | 0.3062          |
| 0.9126        | 0.1836 | 12000 | 0.3000          |
| 0.9142        | 0.1989 | 13000 | 0.2994          |
| 0.8358        | 0.2142 | 14000 | 0.2943          |
| 0.8452        | 0.2295 | 15000 | 0.2916          |
| 0.8372        | 0.2449 | 16000 | 0.2822          |
| 0.8776        | 0.2602 | 17000 | 0.2783          |
| 0.8697        | 0.2755 | 18000 | 0.2809          |
| 0.8541        | 0.2908 | 19000 | 0.2765          |
| 0.8511        | 0.3061 | 20000 | 0.2728          |
| 0.8440        | 0.3214 | 21000 | 0.2739          |
| 0.7897        | 0.3367 | 22000 | 0.2648          |
| 0.8196        | 0.3520 | 23000 | 0.2608          |
| 0.8320        | 0.3673 | 24000 | 0.2614          |
| 0.8043        | 0.3826 | 25000 | 0.2636          |
| 0.7875        | 0.3979 | 26000 | 0.2551          |
| 0.8257        | 0.4132 | 27000 | 0.2501          |
| 0.7276        | 0.4285 | 28000 | 0.2519          |
| 0.8196        | 0.4438 | 29000 | 0.2482          |
| 0.7727        | 0.4591 | 30000 | 0.2497          |
| 0.8316        | 0.4744 | 31000 | 0.2467          |
| 0.7738        | 0.4897 | 32000 | 0.2404          |
| 0.8146        | 0.5050 | 33000 | 0.2410          |
| 0.7571        | 0.5203 | 34000 | 0.2370          |
| 0.7921        | 0.5356 | 35000 | 0.2344          |
| 0.7792        | 0.5509 | 36000 | 0.2319          |
| 0.7014        | 0.5662 | 37000 | 0.2322          |
| 0.7425        | 0.5815 | 38000 | 0.2281          |
| 0.7644        | 0.5968 | 39000 | 0.2265          |
| 0.7048        | 0.6121 | 40000 | 0.2251          |
| 0.6970        | 0.6274 | 41000 | 0.2229          |
| 0.7856        | 0.6427 | 42000 | 0.2214          |
| 0.7114        | 0.6580 | 43000 | 0.2194          |
| 0.7751        | 0.6733 | 44000 | 0.2183          |
| 0.6482        | 0.6886 | 45000 | 0.2169          |
| 0.6889        | 0.7040 | 46000 | 0.2154          |
| 0.7554        | 0.7193 | 47000 | 0.2147          |
| 0.7050        | 0.7346 | 48000 | 0.2124          |
| 0.7927        | 0.7499 | 49000 | 0.2118          |
| 0.7309        | 0.7652 | 50000 | 0.2108          |
| 0.7264        | 0.7805 | 51000 | 0.2108          |
| 0.7256        | 0.7958 | 52000 | 0.2087          |
| 0.7605        | 0.8111 | 53000 | 0.2078          |
| 0.7391        | 0.8264 | 54000 | 0.2082          |
| 0.6781        | 0.8417 | 55000 | 0.2065          |
| 0.7206        | 0.8570 | 56000 | 0.2060          |
| 0.7342        | 0.8723 | 57000 | 0.2051          |
| 0.7519        | 0.8876 | 58000 | 0.2055          |
| 0.7258        | 0.9029 | 59000 | 0.2051          |
| 0.7932        | 0.9182 | 60000 | 0.2047          |
| 0.7391        | 0.9335 | 61000 | 0.2047          |
| 0.7416        | 0.9488 | 62000 | 0.2046          |
| 0.7249        | 0.9641 | 63000 | 0.2045          |
| 0.7000        | 0.9794 | 64000 | 0.2044          |
| 0.6958        | 0.9947 | 65000 | 0.2044          |
| 0.6692        | 1.0    | 65346 | 0.2044          |
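
Validation loss was recorded every 1,000 steps, which corresponds to step-based evaluation in the transformers Trainer. A minimal sketch follows, assuming the model and datasets are already constructed (the card does not describe how they were built); combine it with the hyperparameters sketch above for a full configuration.

```python
# Sketch only: `model`, `train_dataset`, and `eval_dataset` are placeholders.
# The TrainingArguments here show only the 1000-step evaluation cadence seen
# in the table above; merge with the hyperparameters sketch for full settings.
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="tiny-audio-embedded",   # placeholder
    eval_strategy="steps",
    eval_steps=1000,                    # validation loss logged every 1000 steps
    logging_steps=1000,
    num_train_epochs=1,
)

trainer = Trainer(
    model=model,                   # placeholder: the model being fine-tuned
    args=training_args,
    train_dataset=train_dataset,   # placeholder datasets
    eval_dataset=eval_dataset,
)
trainer.train()
```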

Framework versions

  • Transformers 5.7.0
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.22.2

Model details

  • Format: Safetensors
  • Model size: 0.6B parameters
  • Tensor type: F32
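
The checkpoint is published as F32 safetensors with roughly 0.6B parameters. Below is a minimal loading sketch; the repository id and the use of the generic AutoModel class are assumptions, since the card does not state the architecture, processor, or expected inputs.

```python
# Hedged sketch: the repository id and the AutoModel class are assumptions;
# the card does not specify the model class, processor, or input format.
from transformers import AutoModel

model = AutoModel.from_pretrained("username/tiny-audio-embedded")  # placeholder repo id
model.eval()

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.1f}B parameters")  # card reports ~0.6B, stored in F32
```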