
Blog

Adapters for Any Transformer On the HuggingFace Hub   2025-05-21     The AdapterHub Team


The latest release of Adapters, v1.2.0, introduces a new adapter plugin interface that enables adding adapter functionality to nearly any Transformer model. We walk through the details of working with this interface and cover several other new features of the library.
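For readers curious what this looks like in code, below is a minimal sketch of attaching adapter support to a decoder-only model through the plugin interface. The checkpoint name and the attribute paths passed to `AdapterModelInterface` are assumptions for a LLaMA-style architecture, not the exact example from the post; check the Adapters documentation for the fields matching your model.

```python
# Minimal sketch (assumed attribute paths for a LLaMA-style model, adjust as needed):
# plug adapter support into a plain Hugging Face model via the Adapters plugin interface.
import adapters
from adapters import AdapterModelInterface
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")  # hypothetical checkpoint

interface = AdapterModelInterface(
    adapter_methods=["lora", "reft"],        # adapter families to enable
    model_embeddings="model.embed_tokens",   # path to the input embedding module (assumed)
    model_layers="model.layers",             # path to the list of Transformer blocks (assumed)
    layer_self_attn="self_attn",             # self-attention module inside each block
    layer_cross_attn=None,                   # decoder-only model: no cross-attention
    attn_k_proj="k_proj",
    attn_q_proj="q_proj",
    attn_v_proj="v_proj",
    attn_o_proj="o_proj",
    layer_intermediate_proj="mlp.up_proj",
    layer_output_proj="mlp.down_proj",
)

adapters.init(model, interface=interface)    # enable adapter methods on the model
model.add_adapter("demo_lora", config="lora")
model.train_adapter("demo_lora")             # freeze base weights, train only the adapter
```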

Adapters Library Updates: ReFT, QLoRA, Merging, New Models & Hub   2024-08-10     Clifton Poth


Today we are releasing the newest updates in our Adapters library. This post summarizes new features in the latest release as well as selected new features since our initial release in Nov 2023, including new adapter methods, new supported models and Hub updates.
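As a small taste of the merging support mentioned above, here is a minimal sketch of averaging two trained adapters into a new one with `average_adapter`; the model checkpoint, adapter names, and weights are placeholders for illustration.

```python
# Minimal sketch: merge two (hypothetical) trained adapters by weight averaging.
import adapters
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
adapters.init(model)  # enable adapter support on the plain Transformers model

# Two bottleneck adapters, e.g. trained on related tasks (placeholder names).
model.add_adapter("task_a", config="seq_bn")
model.add_adapter("task_b", config="seq_bn")

# Create a new adapter "task_avg" as a weighted average of the two and activate it.
model.average_adapter("task_avg", ["task_a", "task_b"], weights=[0.6, 0.4])
model.set_active_adapters("task_avg")
```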

Introducing Adapters   2023-11-24     Hannah Sterz


Introducing the new Adapters library: a package that supports adding parameter-efficient fine-tuning methods on top of Transformers models and composing them to achieve modular setups.
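To make the basic workflow concrete, here is a minimal sketch of adding and training a bottleneck adapter on a standard Hugging Face model; the checkpoint and adapter name are placeholders.

```python
# Minimal sketch: attach, train, and save a bottleneck adapter with the Adapters library.
import adapters
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

adapters.init(model)  # enable adapter support on the plain Transformers model

# Add a sequential bottleneck adapter and activate it for training:
# only the adapter weights are updated, the pre-trained weights stay frozen.
model.add_adapter("my_task_adapter", config="seq_bn")
model.train_adapter("my_task_adapter")

# ... run the usual Trainer / training loop here ...

# Save only the small adapter module instead of the full model.
model.save_adapter("./my_task_adapter", "my_task_adapter")
```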

Updates in Adapter-Transformers v3.2   2023-03-03     Hannah Sterz


With the newest release of our adapter-transformers library, version 3.2, we add composition block support for prefix tuning and adapter support for several new models.

Updates in Adapter-Transformers v3.1   2022-09-15     Clifton Poth


With the newest release of our adapter-transformers library, version 3.1, we take a further step towards integrating the diverse possibilities of parameter-efficient fine-tuning methods by supporting multiple new adapter methods and Transformer architectures.

Adapter-Transformers v3 - Unifying Efficient Fine-Tuning   2022-03-21     Clifton Poth


With the release of version 3.0 of adapter-transformers today, we're taking the first steps towards integrating the growing and diversified landscape of efficient fine-tuning methods. Version 3.0 adds support for a first batch of recently proposed methods, including Prefix Tuning, Parallel adapters, Mix-and-Match adapters, and Compacters. It also introduces improvements and changes to various aspects of the library.
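For a flavor of how these methods are selected in code, below is a minimal sketch using configuration classes as they exist in the current `adapters` package (in the adapter-transformers v3 era the same configs were imported from `transformers.adapters`); the checkpoint and adapter names are placeholders.

```python
# Minimal sketch: choosing different efficient fine-tuning methods via config classes.
import adapters
from adapters import PrefixTuningConfig, CompacterConfig, MAMConfig
from transformers import AutoModel

model = AutoModel.from_pretrained("roberta-base")
adapters.init(model)

# Prefix tuning: trainable prefix vectors injected into each attention layer.
model.add_adapter("prefix", config=PrefixTuningConfig())

# Compacter: bottleneck adapter built from parameter-efficient hypercomplex layers.
model.add_adapter("compacter", config=CompacterConfig())

# Mix-and-Match adapter: prefix tuning combined with a parallel bottleneck adapter.
model.add_adapter("mam", config=MAMConfig())

# Activate one of the setups for training or inference.
model.set_active_adapters("mam")
```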

Adapters for Generative and Seq2Seq Models in NLP   2021-04-29     Hannah Sterz*


Adapters have proven to be an efficient alternative to fully fine-tuning models. Version 2.0 of the AdapterHub framework includes adapters for the BART and GPT-2 models.

Version 2 of AdapterHub Released   2021-04-29     Clifton Poth


Today, we are releasing version 2 of the AdapterHub. This release introduces several exciting new ways of composing adapters through composition blocks, including AdapterFusion, parallel inference, adapter stacking, and combinations thereof. Furthermore, we now support new Transformer architectures such as GPT-2 and BART.
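To illustrate what composition blocks look like in code, here is a minimal sketch using the composition API, shown with the current `adapters` package (in the adapter-transformers version discussed in this post, the blocks lived under `transformers.adapters.composition`); adapter names are placeholders.

```python
# Minimal sketch: composing two (hypothetical) adapters via composition blocks.
import adapters
from adapters.composition import Stack, Parallel
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
adapters.init(model)

# Two adapters, e.g. a language adapter and a task adapter (placeholder names).
model.add_adapter("lang_adapter")
model.add_adapter("task_adapter")

# Stack: pass the output of the language adapter through the task adapter.
model.active_adapters = Stack("lang_adapter", "task_adapter")

# Parallel: run both adapters side by side on the same input in a single forward pass.
model.active_adapters = Parallel("lang_adapter", "task_adapter")
```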

Adapting Transformers with AdapterHub   2020-11-17     Clifton Poth


Adapters are a new, efficient and composable alternative to full fine-tuning of pre-trained language models. AdapterHub makes working with adapters accessible by providing a framework for training, sharing, discovering and consuming adapter modules. This post provides an extensive overview.
