asreview.models.feature_extraction.SBERT

class asreview.models.feature_extraction.SBERT(*args, transformer_model='all-mpnet-base-v2', is_pretrained_sbert=True, pooling_mode='mean', **kwargs)[source]

Sentence BERT feature extraction technique (sbert).

By setting the transformer_model parameter, you can use other transformer models, for example transformer_model='bert-base-nli-stsb-large'. For a list of available models, see the Sentence BERT documentation.
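For instance, a short sketch of selecting a different model (the variable name sbert is illustrative):

    from asreview.models.feature_extraction import SBERT

    # Swap the default model for another pretrained Sentence BERT model.
    sbert = SBERT(transformer_model='bert-base-nli-stsb-large')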

Sentence BERT is a sentence embedding model trained on a large corpus of human-written text. It is fast and accurate and can be used for many tasks.

The huggingface library also includes multilingual models. If your dataset contains records in multiple languages, you can use the transformer_model parameter to select the model that is most suitable for your data.
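As a hedged illustration, a multilingual model such as 'paraphrase-multilingual-mpnet-base-v2' (listed in the Sentence BERT documentation) can be selected in the same way:

    from asreview.models.feature_extraction import SBERT

    # A multilingual Sentence BERT model for datasets that mix languages.
    sbert = SBERT(transformer_model='paraphrase-multilingual-mpnet-base-v2')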

Note

This feature extraction technique requires sentence_transformers to be installed. Use pip install asreview[sentence_transformers] or install all optional ASReview dependencies with pip install asreview[all] to install the package.

Parameters:
  • transformer_model (str, optional) – The transformer model to use. Default: ‘all-mpnet-base-v2’

  • is_pretrained_sbert (bool, optional) – Whether transformer_model refers to a pretrained Sentence BERT model. Default: True

  • pooling_mode (str, optional) – Pooling mode used to derive sentence embeddings from word embeddings. Only used if is_pretrained_sbert=False. Default: ‘mean’. Available options (see the sketch after this list):

      mean: mean pooling of word embeddings
      max: max pooling of word embeddings
      cls: embedding of the [CLS] token used as the sentence embedding
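A minimal sketch of the non-pretrained path, assuming 'bert-base-uncased' as an example plain transformer (not a pretrained Sentence BERT model), so pooling_mode takes effect:

    from asreview.models.feature_extraction import SBERT

    # Build sentence embeddings from a plain transformer's word embeddings;
    # pooling_mode is only consulted when is_pretrained_sbert=False.
    sbert = SBERT(
        transformer_model='bert-base-uncased',
        is_pretrained_sbert=False,
        pooling_mode='max',
    )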

Attributes

default_param

Get the default parameters of the model.

label

name

param

Get the (assigned) parameters of the model.

Methods

fit(texts)

Fit the model to the texts.

fit_transform(texts[, titles, abstracts, ...])

Fit and transform a list of texts.

full_hyper_space()

hyper_space()

transform(texts)

Transform a list of texts.
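
Putting it together, a minimal usage sketch (the two toy texts are illustrative; fit_transform also accepts the optional arguments shown in its signature above):

    from asreview.models.feature_extraction import SBERT

    texts = [
        "Deep learning for systematic review screening.",
        "Active learning reduces manual screening effort.",
    ]

    sbert = SBERT()  # defaults to 'all-mpnet-base-v2'
    X = sbert.fit_transform(texts)  # expected: one embedding vector per text
    print(X.shape)  # e.g. (2, embedding_dim)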