🤗 LLM-Drop hosts research artifacts for efficient foundation models, with a focus on large language models and unified multimodal models.
Our work studies how modern foundation models can be made more efficient while preserving their core capabilities. This page collects model weights, code links, project pages, and related resources from our research projects.
- Uncovering the Redundancy in Transformers via a Unified Study of Layer Dropping (TMLR 2026) — a toy sketch of the layer-dropping idea follows this list.
- Demystifying When Pruning Works via Representation Hierarchies
- Understanding and Harnessing Sparsity in Unified Multimodal Models
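Layer dropping, the subject of the first project above, removes whole transformer blocks at inference time and lets the residual stream pass through them unchanged. Below is a rough, hypothetical illustration of that idea in plain PyTorch; the class name, dimensions, and `drop` argument are made up for exposition and do not correspond to the code or models released by this organization.

```python
import torch
import torch.nn as nn

class ToyDroppableStack(nn.Module):
    """Toy transformer stack whose blocks can be skipped at inference.

    Illustrative only: the names and dimensions here are assumptions,
    not the API of any model hosted on this page.
    """

    def __init__(self, num_layers: int = 12, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
             for _ in range(num_layers)]
        )

    def forward(self, x: torch.Tensor, drop: frozenset = frozenset()) -> torch.Tensor:
        for i, block in enumerate(self.blocks):
            if i in drop:
                continue  # dropped block: the residual stream passes through unchanged
            x = block(x)
        return x

model = ToyDroppableStack().eval()
tokens = torch.randn(1, 16, 64)  # (batch, sequence length, d_model)
with torch.no_grad():
    full = model(tokens)                                # all 12 blocks
    pruned = model(tokens, drop=frozenset({8, 9, 10}))  # skip three deeper blocks
```

Because each block writes an update into the residual stream rather than replacing it, skipping a redundant block degrades the output far less than one might expect, which is the redundancy the study above quantifies.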
For questions or collaborations, please contact: