
Blog

Updates in Adapter-Transformers v3.2   2023-03-03     Hannah Sterz

With the newest release of our adapter-transformers library, version 3.2, we add adapter composition block support for prefix tuning and bring adapters to several new models.

Updates in Adapter-Transformers v3.1   2022-09-15     Clifton Poth

With the newest release of our adapter-transformers library, version 3.1, we take a further step towards integrating the diverse possibilities of parameter-efficient fine-tuning methods by supporting multiple new adapter methods and Transformer architectures.

Adapter-Transformers v3 - Unifying Efficient Fine-Tuning   2022-03-21     Clifton Poth

With the release of version 3.0 of adapter-transformers today, we are taking the first steps toward integrating the increasingly diverse landscape of efficient fine-tuning methods. Version 3.0 adds support for a first batch of recently proposed methods, including Prefix Tuning, Parallel adapters, Mix-and-Match adapters, and Compacters. It also introduces improvements and changes to various aspects of the library.
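As a rough sketch of how such methods slot into the library (assuming the v3-style `AutoAdapterModel` class and the config classes documented for that release; the adapter names and base model are placeholders), adding one of them looks like this:

```python
# Minimal sketch, assuming the adapter-transformers v3 API
# (AutoAdapterModel plus the config classes shipped with v3.0).
from transformers import AutoAdapterModel
from transformers.adapters import CompacterConfig, PrefixTuningConfig

model = AutoAdapterModel.from_pretrained("bert-base-uncased")

# Add a prefix-tuning module and a compacter under separate names.
model.add_adapter("prefix", config=PrefixTuningConfig(prefix_length=30))
model.add_adapter("compacter", config=CompacterConfig())

# Freeze the pre-trained model weights and train only the prefix-tuning module.
model.train_adapter("prefix")
```

Only the parameters of the activated method are updated during training; the pre-trained Transformer weights stay frozen.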

Adapters for Generative and Seq2Seq Models in NLP   2021-04-29     Hannah Sterz*

Adapters have proven to be an efficient alternative to fully fine-tuning models. Version 2.0 of the AdapterHub framework adds adapters for the BART and GPT-2 models.

Version 2 of AdapterHub Released   2021-04-29     Clifton Poth

Today, we are releasing version 2 of the AdapterHub. This release introduces several exciting new ways to compose adapters through composition blocks, including AdapterFusion, parallel inference, adapter stacking, and combinations thereof. Furthermore, we now support new Transformer architectures such as GPT-2 and BART.
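A minimal sketch of what adapter composition can look like with these blocks, assuming the `AutoModelWithHeads` class and the `transformers.adapters.composition` module described in the adapter-transformers documentation; the adapter names are placeholders:

```python
# Minimal sketch of composition blocks, assuming the adapter-transformers
# composition API (Stack, Parallel) and AutoModelWithHeads.
# The adapter names "a", "b", "c" are placeholders.
from transformers import AutoModelWithHeads
from transformers.adapters.composition import Parallel, Stack

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
for name in ("a", "b", "c"):
    model.add_adapter(name)

# Stack "b" on top of "a", so the output of "a" feeds into "b" ...
model.set_active_adapters(Stack("a", "b"))

# ... or run "a" and "c" in parallel on the same input in one forward pass.
model.set_active_adapters(Parallel("a", "c"))
```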

Adapting Transformers with AdapterHub   2020-11-17     Clifton Poth

Adapters are a new, efficient and composable alternative to full fine-tuning of pre-trained language models. AdapterHub makes working with adapters accessible by providing a framework for training, sharing, discovering and consuming adapter modules. This post provides an extensive overview.
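On the consuming side, a minimal sketch of loading a pre-trained adapter from the Hub, assuming the `load_adapter()` and `set_active_adapters()` calls from the AdapterHub quickstart; the adapter identifier is an illustrative example and actual availability may differ:

```python
# Minimal sketch of consuming an adapter from AdapterHub, assuming the
# load_adapter() / set_active_adapters() API from the quickstart docs.
# The adapter identifier "sentiment/sst-2@ukp" is illustrative.
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")

# Download a task adapter (with its prediction head) and activate it.
adapter_name = model.load_adapter("sentiment/sst-2@ukp")
model.set_active_adapters(adapter_name)
```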
