# `solwol/my-awesome-adapter` for roberta-base

An adapter for the `roberta-base` model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification.
This adapter was created for usage with the Adapters library.
First, install `transformers` and `adapters`:

```bash
pip install -U transformers adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("solwol/my-awesome-adapter", source="hf", set_active=True)
```
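To verify that the adapter is loaded and active, the library's introspection helpers can be used. This is a minimal sketch; the exact layout of the summary table may vary between `adapters` versions:

```python
# The name returned by load_adapter() should now be the active adapter
print(model.active_adapters)

# Overview of all loaded adapters: name, architecture, parameter count, active flag
print(model.adapter_summary())
```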
Next, to perform sentiment classification:
```python
from transformers import AutoTokenizer, TextClassificationPipeline

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)
classifier("Adapters are awesome!")
```
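The pipeline returns a list of dicts, each with a predicted label and a score. For reference, here is a rough sketch of what it does under the hood, assuming the prediction head returns a standard output object with a `logits` attribute:

```python
import torch

# Tokenize the input and run a forward pass through the adapted model
inputs = tokenizer("Adapters are awesome!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The classification head emits one logit per class; softmax yields probabilities
probs = torch.softmax(outputs.logits, dim=-1)
print(probs.argmax(dim=-1).item(), probs.max().item())
```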
{ "adapter_residual_before_ln": false, "cross_adapter": false, "factorized_phm_W": true, "factorized_phm_rule": false, "hypercomplex_nonlinearity": "glorot-uniform", "init_weights": "bert", "inv_adapter": null, "inv_adapter_reduction_factor": null, "is_parallel": false, "learn_phm": true, "leave_out": [], "ln_after": false, "ln_before": false, "mh_adapter": false, "non_linearity": "relu", "original_ln_after": true, "original_ln_before": true, "output_adapter": true, "phm_bias": true, "phm_c_init": "normal", "phm_dim": 4, "phm_init_range": 0.0001, "phm_layer": false, "phm_rank": 1, "reduction_factor": 16, "residual_before_ln": true, "scaling": 1.0, "shared_W_phm": false, "shared_phm_rule": true, "use_gating": false }
The sentiment/rotten_tomatoes dataset was introduced in:

```bibtex
@inproceedings{Pang+Lee:05a,
  author    = {Bo Pang and Lillian Lee},
  title     = {Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales},
  booktitle = {Proceedings of ACL},
  pages     = {115--124},
  year      = {2005}
}
```