Text Generation API

class TransformersTextGenerator[source]

TransformersTextGenerator(tokenizer:PreTrainedTokenizer, model:PreTrainedModel) :: AdaptiveModel

Adaptive model for Transformers language models

Parameters:

  • tokenizer : <class 'transformers.tokenization_utils.PreTrainedTokenizer'>

    A tokenizer object from Hugging Face's transformers and tokenizers libraries

  • model : <class 'transformers.modeling_utils.PreTrainedModel'>

    A transformers language model

TransformersTextGenerator.load[source]

TransformersTextGenerator.load(model_name_or_path:str)

Class method for loading and constructing this model

Parameters:

  • model_name_or_path : <class 'str'>

    A key string for one of Transformers' pre-trained language models

Returns:

  • <class 'adaptnlp.model.AdaptiveModel'>
o = TransformersTextGenerator.load('gpt2')



Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.

TransformersTextGenerator.predict[source]

TransformersTextGenerator.predict(text:Union[List[str], str], mini_batch_size:int=32, num_tokens_to_produce:int=50)

Predict method for running inference with the loaded text generation model. Keyword arguments accepted by transformers.PreTrainedModel.generate() can be passed through as well.

Parameters:

  • text : typing.Union[typing.List[str], str]

    Sentences to run inference on

  • mini_batch_size : <class 'int'>, optional

    Mini batch size

  • num_tokens_to_produce : <class 'int'>, optional

    Number of tokens you want to generate

Returns:

  • typing.List[str]

    A list of generated sentences

class EasyTextGenerator[source]

EasyTextGenerator()

Text Generation Module

EasyTextGenerator.generate[source]

EasyTextGenerator.generate(text:Union[List[str], str], model_name_or_path:Union[str, HFModelResult]='gpt2', mini_batch_size:int=32, num_tokens_to_produce:int=50)

Predict method for running inference with the pre-trained text generation model. Keyword arguments accepted by transformers.PreTrainedModel.generate() can be passed through as well.

Parameters:

  • text : typing.Union[typing.List[str], str]

    List of sentences to run inference on

  • model_name_or_path : [<class 'str'>, <class 'adaptnlp.model_hub.HFModelResult'>], optional

    A model id or path to a pre-trained model repository or custom trained model directory

  • mini_batch_size : <class 'int'>, optional

    Mini batch size

  • num_tokens_to_produce : <class 'int'>, optional

    Number of tokens you want to generate

Returns:

  • typing.List[str]

    A list of generated sentences