Abstract
Layer pruning compresses large language models while maintaining classification performance but causes significant degradation in generative reasoning tasks, with limited recovery possible through supervised finetuning on self-generated responses.
Recent work has shown that layer pruning can compress large language models (LLMs) while retaining strong performance on classification benchmarks with little or no finetuning. However, existing pruning techniques often suffer severe degradation on generative reasoning tasks. Through a systematic study across multiple model families, we find that tasks requiring multi-step reasoning are particularly sensitive to depth reduction. Beyond surface-level text degeneration, we observe degradation of critical algorithmic capabilities, including arithmetic computation for mathematical reasoning and balanced parenthesis generation for code synthesis. Under realistic post-training constraints, without access to pretraining-scale data or compute, we evaluate a simple mitigation strategy based on supervised finetuning with Self-Generated Responses. This approach achieves strong recovery on classification tasks, retaining up to 90% of baseline performance, and yields substantial gains of up to 20–30 percentage points on generative benchmarks compared to prior post-pruning techniques. Crucially, despite these gains, recovery for generative reasoning remains fundamentally limited relative to classification tasks and is viable primarily at lower pruning ratios. Overall, we characterize the practical limits of layer pruning for generative reasoning and provide guidance on when depth reduction can be applied effectively under constrained post-training regimes.
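To make the setting concrete, the sketch below shows what depth pruning typically looks like in code: a contiguous block of decoder layers is removed from a pretrained causal LM. The checkpoint name, the dropped layer range, and the contiguous-block heuristic are illustrative assumptions, not necessarily the paper's selection criterion.

```python
# Minimal layer-pruning sketch for a LLaMA-style Hugging Face model.
# Assumptions: "meta-llama/Llama-2-7b-hf" (32 decoder layers) and the
# dropped range [20, 28) are placeholders, not the paper's exact recipe.
import torch
from torch import nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16
)

start, end = 20, 28  # drop 8 of 32 layers -> 25% depth reduction
kept = nn.ModuleList(
    [layer for i, layer in enumerate(model.model.layers)
     if not (start <= i < end)]
)
model.model.layers = kept
model.config.num_hidden_layers = len(kept)

# Reindex the attention modules so KV-cache bookkeeping stays consistent.
for i, layer in enumerate(model.model.layers):
    layer.self_attn.layer_idx = i
```

The pruned model can be used directly for evaluation or passed to a recovery finetuning stage; which layers to drop is the key design choice, and the paper studies how that choice interacts with generative reasoning.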
Community
TL;DR: Layer pruning compresses LLMs with little impact on classification, but severely degrades generative reasoning by disrupting core algorithmic abilities like arithmetic and syntax. Self-generated supervision improves recovery, yet reveals clear practical limits to depth reduction under realistic training constraints.
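As a sketch of what "self-generated supervision" can look like in practice (one plausible reading of the recipe: the unpruned model answers a prompt set, and the pruned model is finetuned on those pairs; the checkpoint and prompts below are placeholders, not the paper's data):

```python
# Hypothetical data-collection loop for SFT on Self-Generated Responses.
# The unpruned "teacher" answers each prompt; the pruned model is then
# finetuned on the resulting (prompt, response) pairs with standard SFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(name)
teacher = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.bfloat16, device_map="auto"
)

prompts = [
    "Compute 37 * 24 step by step.",
    "Write a Python function that checks for balanced parentheses.",
]

sft_data = []
for p in prompts:
    inputs = tok(p, return_tensors="pt").to(teacher.device)
    out = teacher.generate(**inputs, max_new_tokens=256, do_sample=False)
    response = tok.decode(
        out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    sft_data.append({"prompt": p, "response": response})
# sft_data can now be fed to any standard SFT trainer (e.g., TRL's SFTTrainer).
```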
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- From LLMs to LRMs: Rethinking Pruning for Reasoning-Centric Models (2026)
- Iterative Structured Pruning for Large Language Models with Multi-Domain Calibration (2026)
- Leveraging KV Similarity for Online Structured Pruning in LLMs (2025)
- Do LLMs Encode Functional Importance of Reasoning Tokens? (2026)
- How Does Prefix Matter in Reasoning Model Tuning? (2026)
- POP: Prefill-Only Pruning for Efficient Large Model Inference (2026)
- Beyond Output Critique: Self-Correction via Task Distillation (2026)
Models citing this paper: 4
Datasets citing this paper: 0
Spaces citing this paper: 0