arxiv:2603.03508

Raising Bars, Not Parameters: LilMoo Compact Language Model for Hindi

Published on Mar 3
Abstract

LilMoo is a 0.6-billion-parameter Hindi language model trained from scratch using a transparent pipeline and high-quality corpus, achieving performance comparable to larger multilingual models.

AI-generated summary

The dominance of large multilingual foundation models has widened linguistic inequalities in Natural Language Processing (NLP), often leaving low-resource languages underrepresented. This paper introduces LilMoo, a 0.6-billion-parameter Hindi language model trained entirely from scratch to address this gap. Unlike prior Hindi models that rely on continual pretraining from opaque multilingual foundations, LilMoo is developed through a fully transparent and reproducible pipeline optimized for limited compute environments. We construct a high-quality Hindi corpus (GigaLekh) filtered through both heuristic and learned (LLM-as-a-judge) methods, complemented by bilingual augmentation with curated English data. Using this dataset, we explore various training recipes for small-scale language models. Across comprehensive evaluation suites, LilMoo consistently outperforms comparably sized multilingual baselines such as Qwen2.5-0.5B and Qwen3-0.6B, demonstrating that well-designed language-specific pretraining can rival large multilingual models at the sub-billion-parameter range.
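The two-stage corpus filtering the abstract describes (cheap heuristics first, then a learned LLM-as-a-judge pass) could look roughly like the following sketch. The thresholds, the 0–5 scoring scale, and the `llm_judge_score` stub are illustrative assumptions, not the paper's actual pipeline; a real implementation would replace the stub with a prompt to a judge model.

```python
import re

def heuristic_filter(text, min_chars=50, min_devanagari_ratio=0.5):
    """Cheap rule-based pass: drop very short documents and those
    whose non-space characters are not predominantly Devanagari."""
    if len(text) < min_chars:
        return False
    letters = [c for c in text if not c.isspace()]
    if not letters:
        return False
    devanagari = sum(1 for c in letters if '\u0900' <= c <= '\u097F')
    return devanagari / len(letters) >= min_devanagari_ratio

def llm_judge_score(text):
    """Placeholder for the LLM-as-a-judge call; a real pipeline would
    prompt a model to rate quality (here assumed to be a 0-5 scale).
    This stub fakes a score from simple proxies, for illustration only."""
    score = 3.0
    if len(text) > 200:           # proxy: longer documents
        score += 1.0
    if re.search(r'[\u0964.!?]', text):  # proxy: sentence-final punctuation (danda, etc.)
        score += 1.0
    return score

def keep_document(text, min_score=4.0):
    """Two-stage filter: heuristics gate the expensive judge call."""
    return heuristic_filter(text) and llm_judge_score(text) >= min_score
```

Running the heuristic pass before the judge is the usual cost-saving design: rule-based checks discard obvious noise so the expensive model call only sees plausible candidates.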

