2022-09-15 Clifton Poth
With the newest release of our adapter-transformers library, version 3.1, we take a further step toward integrating the diverse landscape of parameter-efficient fine-tuning methods, adding support for multiple new adapter methods and Transformer architectures.
2022-03-21 Clifton Poth
With the release of version 3.0 of adapter-transformers today, we're taking the first steps toward integrating the growing and diverse landscape of efficient fine-tuning methods. Version 3.0 adds support for a first batch of recently proposed methods, including Prefix Tuning, Parallel adapters, Mix-and-Match adapters and Compacters. The release also introduces improvements and changes across various aspects of the library.
2021-04-29 Clifton Poth
Today, we are releasing version 2 of the AdapterHub. This release introduces several exciting new ways of composing adapters through composition blocks, including AdapterFusion, parallel inference, adapter stacking, and combinations thereof. Furthermore, we now support new Transformer architectures such as GPT-2 and BART.
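The composition-block idea can be sketched with plain functions: treat each adapter as a function on a hidden state, then a stacking block chains adapters sequentially while a parallel block runs each on its own copy of the input. This is a minimal conceptual sketch, not the library's actual classes or API; the names `Stack` and `Parallel` here only mirror the block names mentioned above.

```python
# Conceptual sketch of composition blocks, modeling each adapter as a
# function on a hidden state. NOT the adapter-transformers API.

def Stack(*adapters):
    """Apply adapters sequentially: the output of one feeds the next."""
    def apply(hidden):
        for adapter in adapters:
            hidden = adapter(hidden)
        return hidden
    return apply

def Parallel(*adapters):
    """Apply each adapter independently to a copy of the same input."""
    def apply(hidden):
        return [adapter(hidden) for adapter in adapters]
    return apply
```

Because blocks return plain functions, they nest naturally, which is the point of composition: a `Stack` can contain a `Parallel` and vice versa.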
2021-04-29 Hannah Sterz*
Adapters have proven to be an efficient alternative to fully fine-tuning models. Version 2.0 of the AdapterHub framework adds adapters for the BART and GPT-2 models.
2020-11-17 Clifton Poth
Adapters are a new, efficient and composable alternative to full fine-tuning of pre-trained language models. AdapterHub makes working with adapters accessible by providing a framework for training, sharing, discovering and consuming adapter modules. This post provides an extensive overview.
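The core idea behind adapters can be illustrated in a few lines: a small bottleneck module (down-projection, nonlinearity, up-projection) is inserted into a frozen pre-trained model, with a residual connection around it, so only the tiny projection matrices are trained. The sketch below is a toy pure-Python illustration of that standard bottleneck formulation, not the library's implementation; all function names are made up for illustration.

```python
# Toy sketch of a bottleneck adapter layer: h + W_up(f(W_down(h))).
# Pure Python for clarity; real implementations use framework tensors.

def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    """Elementwise nonlinearity inside the bottleneck."""
    return [max(0.0, x) for x in v]

def adapter(h, W_down, W_up):
    """Down-project the hidden state to a small bottleneck, apply the
    nonlinearity, up-project back, and add the residual connection."""
    z = relu(matvec(W_down, h))
    out = matvec(W_up, z)
    return [hi + oi for hi, oi in zip(h, out)]
```

Only `W_down` and `W_up` are trained, which is why adapters are parameter-efficient: the bottleneck dimension is much smaller than the hidden size, and the frozen base model is shared across tasks.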