"Adapter" refers to a set of newly introduced weights, typically within the layers of a transformer model. Adapters provide an alternative to fully fine-tuning the model for each downstream task, while maintaining performance. They also have the added benefit of requiring as little as 1MB of storage space per task! Learn More!

##### Built on HuggingFace 🤗 Transformers 🚀

AdapterHub builds on the HuggingFace transformers framework, requiring as little as two additional lines of code to train adapters for a downstream task.

# Quickstart 🔥

from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
model.load_adapter("sst-2")  # download the pre-trained SST-2 adapter from the Hub (identifier may vary)
model.set_active_adapters("sst-2")

The SST adapter is light-weight: it is only 3MB! At the same time, it achieves results that are on par with fully fine-tuned BERT. We can now leverage the SST adapter to predict the sentiment of sentences:

import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokens = tokenizer.tokenize("AdapterHub is awesome!")
input_tensor = torch.tensor([
    tokenizer.convert_tokens_to_ids(tokens)
])
outputs = model(input_tensor)
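
To turn these raw outputs into a sentiment prediction, we can take the argmax over the classification logits. A minimal sketch (the negative/positive label order is an assumption about the adapter's head configuration):

# pick the highest-scoring class from the classification head's logits
# (depending on the library version, the logits may also be available as outputs[0])
predicted_class = outputs.logits.argmax(dim=-1).item()
print("positive" if predicted_class == 1 else "negative")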

Training a new task adapter requires only a few modifications compared to fully fine-tuning a model with Hugging Face's Trainer. We first load a pre-trained model, e.g., roberta-base, and add a new task adapter:

model = AutoAdapterModel.from_pretrained('roberta-base')
model.add_adapter("sst-2")


By calling train_adapter("sst-2") we freeze all transformer parameters except for the parameters of the sst-2 adapter. Before training, we add a new classification head to our model:

model.train_adapter("sst-2")
model.add_classification_head("sst-2", num_labels=2)


The weights of this classification head can be stored together with the adapter weights to allow for full reproducibility. The method call model.set_active_adapters("sst-2") registers the sst-2 adapter as a default for training. This also supports adapter stacking and adapter fusion!
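
For the single task adapter used here, this is a one-liner; composition blocks can be passed in the same place (shown only as a comment, since the import path differs between library versions):

# register the sst-2 adapter (and its matching head) as the default for training
model.set_active_adapters("sst-2")

# composition blocks such as Stack or Fuse can be activated the same way, e.g.:
# model.set_active_adapters(Stack("multinli", "sst-2"))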

We can then train our adapter using the Hugging Face Trainer:

trainer.train()
model.save_all_adapters('output-path')
💡 Tip 1: Adapter weights are usually initialized randomly, which is why we require a higher learning rate than usual. We have found that a default adapter learning rate of lr=0.0001 works well for most settings.

💡 Tip 2: Depending on your dataset size, you might also need to train longer than usual. To avoid overfitting, you can evaluate the adapters on the development set after each epoch and only save the best model.
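
As a concrete illustration of these tips, a minimal Trainer setup might look as follows (train_dataset, eval_dataset and the number of epochs are placeholders, not values from the original guide):

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="output-path",
    learning_rate=1e-4,            # the higher adapter learning rate from Tip 1
    num_train_epochs=10,           # adapters may need longer training (Tip 2)
    evaluation_strategy="epoch",   # evaluate on the development set after each epoch
    save_strategy="epoch",
    load_best_model_at_end=True,   # only keep the best model (Tip 2)
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)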

That's it! model.save_all_adapters('output-path') exports all adapters. Consider sharing them on AdapterHub!

### AdapterFusion

AdapterFusion allows us to combine the knowledge of multiple pre-trained adapters for a new target task. We again start from a pre-trained model:

model = BertModelWithHeads.from_pretrained("bert-base-uncased")
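
Next, the adapters referenced in the Fuse block below need to be loaded into the model. A minimal sketch, assuming pre-trained multinli, qqp and qnli adapters can be resolved from the Hub under these names:

# load the pre-trained adapters we want to fuse
model.load_adapter("multinli")
model.load_adapter("qqp")
model.load_adapter("qnli")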



On top of the loaded adapters, we add a new fusion layer using add_fusion(). For this purpose, we first define the adapter setup using the Fuse composition block. During training, only the weights of the fusion layer will be updated. We ensure this by first activating all adapters in the setup and then calling train_fusion():

adapter_setup = Fuse("multinli", "qqp", "qnli")
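
Following the method names described above, the remaining fusion setup might look like this (a sketch; the exact method names can differ between library versions):

model.add_fusion(adapter_setup)           # add a new fusion layer on top of the loaded adapters
model.set_active_adapters(adapter_setup)  # activate all adapters in the setup
model.train_fusion(adapter_setup)         # freeze everything except the fusion weights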


From here on, the training procedure is identical to training a single adapter or a full model. Check out the full working example in the Colab notebook.

### AdapterDrop

AdapterDrop allows us to remove adapters on lower layers during training and inference. This can be realised with the skip_layers argument. It specifies for which layers the adapters should be skipped during a forward pass. In order to train a model with AdapterDrop, we specify a callback for the Trainer class that sets the skip_layers argument to the layers that should be skipped in each step as follows:

import numpy as np
from transformers import TrainerCallback

class AdapterDropTrainerCallback(TrainerCallback):
    def on_step_begin(self, args, state, control, **kwargs):
        # Randomly choose how many of the lower layers to skip in this step
        skip_layers = list(range(np.random.randint(0, 11)))
        kwargs["model"].set_active_adapters("sst-2", skip_layers=skip_layers)  # "sst-2": the running example's adapter

    def on_evaluate(self, args, state, control, **kwargs):
        # Deactivate skipping layers during evaluation (otherwise it would use the
        # previous randomly chosen skip_layers and thus yield results not comparable
        # across different epochs)
        kwargs["model"].set_active_adapters("sst-2", skip_layers=None)
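
The callback is then registered with the Trainer before calling trainer.train(), e.g.:

trainer.add_callback(AdapterDropTrainerCallback())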


Check out the AdapterDrop Colab Notebook for further details.

### Parallel Inference

During inference, it might be beneficial to pass the input data through several different adapters to compare the results or predict different attributes in one forward pass. The Parallel Block enables us to do this. When the Parallel Block is used in combination with a ModelWithHeads class, each adapter also has a corresponding head.

model = AutoAdapterModel.from_pretrained("bert-base-uncased")
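
A sketch of such a setup with two placeholder tasks (task1 and task2 are hypothetical adapter and head names; the import path for the composition blocks may differ between library versions):

from transformers.adapters.composition import Parallel

# two task adapters with matching classification heads (placeholder names)
model.add_adapter("task1")
model.add_classification_head("task1", num_labels=2)
model.add_adapter("task2")
model.add_classification_head("task2", num_labels=3)

# run both adapter/head pairs in a single forward pass
model.set_active_adapters(Parallel("task1", "task2"))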


A forward pass through the model with the Parallel Block is equivalent to two single forward passes: one through the model with the task1 adapter and head activated, and one with the task2 adapter and head. The output is returned as a MultiHeadOutput, which acts as a list of the head outputs with an additional loss attribute. The loss attribute is the sum of the losses of the individual outputs.
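
Accessing the individual head outputs could then look like this (a sketch; the example sentence is arbitrary):

inputs = tokenizer("AdapterHub is awesome!", return_tensors="pt")
outputs = model(**inputs)

print(outputs[0].logits)  # output of the "task1" head
print(outputs[1].logits)  # output of the "task2" head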

# Citation 📝

@inproceedings{pfeiffer2020AdapterHub,
    title={AdapterHub: A Framework for Adapting Transformers},
    author={Jonas Pfeiffer and
            Andreas R\"uckl\'{e} and
            Clifton Poth and
            Aishwarya Kamath and
            Ivan Vuli\'{c} and
            Sebastian Ruder and
            Kyunghyun Cho and
            Iryna Gurevych},
    booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020): Systems Demonstrations},
    year={2020},
}