asreview.models.feature_extraction.SBERT

class asreview.models.feature_extraction.SBERT(*args, transformer_model='all-mpnet-base-v2', **kwargs)[source]

Sentence BERT feature extraction technique (sbert).

By setting the transformer_model parameter, you can use other transformer models. For example, transformer_model='bert-base-nli-stsb-large'. For a list of available models, see the Sentence BERT documentation.
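
For instance, assuming sentence_transformers is installed, a non-default model can be selected at construction time (a minimal sketch; any model name listed in the Sentence BERT documentation should work here):

    from asreview.models.feature_extraction import SBERT

    # Default model ('all-mpnet-base-v2')
    sbert = SBERT()

    # Pass any Sentence BERT model name known to sentence_transformers
    sbert = SBERT(transformer_model='bert-base-nli-stsb-large')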

Sentence BERT is a sentence embedding model trained on a large corpus of human-written text. It maps each text to a dense vector representation, which this technique uses as the feature matrix.

The huggingface library also hosts multilingual sentence embedding models. If your dataset contains records in multiple languages, you can use the transformer_model parameter to select the model that is most suitable for your data.
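
For example, a multilingual model could be selected as follows (a minimal sketch; 'paraphrase-multilingual-mpnet-base-v2' is given here only as an illustration, so check the Sentence BERT documentation to confirm it suits your languages):

    from asreview.models.feature_extraction import SBERT

    # Example multilingual Sentence BERT model for datasets mixing several
    # languages (verify the model name against the Sentence BERT documentation)
    sbert = SBERT(transformer_model='paraphrase-multilingual-mpnet-base-v2')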

Note

This feature extraction technique requires sentence_transformers to be installed. Use pip install sentence_transformers or install all optional ASReview dependencies with pip install asreview[all] to install the package.

Parameters

transformer_model (str, optional) – The transformer model to use. Default: ‘all-mpnet-base-v2’

Attributes

default_param

Get the default parameters of the model.

label

name

param

Get the (assigned) parameters of the model.

Methods

fit(texts)

Fit the model to the texts.

fit_transform(texts[, titles, abstracts, ...])

Fit and transform a list of texts.

full_hyper_space()

hyper_space()

transform(texts)

Transform a list of texts.
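
A minimal usage sketch, assuming texts is a sequence of strings (titles, abstracts, and the remaining keyword arguments of fit_transform are optional, as shown in the signature above); the result is a numerical feature matrix with one row per text:

    from asreview.models.feature_extraction import SBERT

    texts = [
        "Active learning reduces screening workload in systematic reviews.",
        "Sentence embeddings capture semantic similarity between abstracts.",
    ]

    sbert = SBERT()
    X = sbert.fit_transform(texts)  # feature matrix, one row per text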