Translation API for AdaptNLP

class TranslationResult[source]

TranslationResult(inputs:List[str], input_lang:str, output_lang:str, translations:List[str])

A basic result class for Translation problems

Parameters:

  • inputs : typing.List[str]

    A list of input string sentences

  • input_lang : <class 'str'>

    The input language

  • output_lang : <class 'str'>

    The output language

  • translations : typing.List[str]

    A list of the translated sentences

TranslationResult.to_dict[source]

TranslationResult.to_dict(detail_level:DetailLevel='low')

Convert self to a filtered dictionary

Parameters:

  • detail_level : <class 'fastcore.basics.DetailLevel'>, optional

    A detail level to return
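
Below is a minimal usage sketch of the result object, assuming `TranslationResult` can be imported from the top-level `adaptnlp` package (the exact keys returned by `to_dict` depend on the requested detail level):

```python
from adaptnlp import TranslationResult  # import path assumed

# A hand-constructed result for illustration; in practice these objects
# are produced by the translation classes documented below.
result = TranslationResult(
    inputs=["Machine learning is great"],
    input_lang="en",
    output_lang="de",
    translations=["Maschinelles Lernen ist großartig"],
)

# Convert to a filtered dictionary; 'low' is the default detail level
print(result.to_dict(detail_level="low"))
```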

class TransformersTranslator[source]

TransformersTranslator(tokenizer:PreTrainedTokenizer, model:PreTrainedModel) :: AdaptiveModel

Adaptive model for Transformers' conditional generation or language models (Transformers' T5 and BART conditional generation models have a language modeling head)

Parameters:

  • tokenizer : <class 'transformers.tokenization_utils.PreTrainedTokenizer'>

    A tokenizer object from Hugging Face's transformers library

  • model : <class 'transformers.modeling_utils.PreTrainedModel'>

    A Transformers conditional generation model (BART or T5) or language model

TransformersTranslator.load[source]

TransformersTranslator.load(model_name_or_path:str)

Class method for loading and constructing this translator

Parameters:

  • model_name_or_path : <class 'str'>

    A key string for one of Transformers' pre-trained translation models

Returns:

  • <class 'adaptnlp.model.AdaptiveModel'>
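
As a sketch, assuming `TransformersTranslator` is exported from the top-level `adaptnlp` package and the `t5-small` checkpoint is reachable on the Hugging Face hub:

```python
from adaptnlp import TransformersTranslator  # import path assumed

# Load a pre-trained conditional generation model and its tokenizer by model id
translator = TransformersTranslator.load("t5-small")
```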

TransformersTranslator.predict[source]

TransformersTranslator.predict(text:Union[List[str], str], t5_prefix:str='translate English to German', mini_batch_size:int=32, num_beams:int=1, min_length:int=0, max_length:int=128, early_stopping:bool=True, detail_level:DetailLevel='low', **kwargs)

Predict method for running inference with the pre-trained translation model. Keyword arguments accepted by `PreTrainedModel.generate()` can be passed as well

Parameters:

  • text : typing.Union[typing.List[str], str]

    Sentences to run inference on

  • t5_prefix : <class 'str'>, optional

    The prefix prepended to the input text for the specified task. Only used for T5-type models.

  • mini_batch_size : <class 'int'>, optional

    Mini batch size

  • num_beams : <class 'int'>, optional

    Number of beams for beam search; must be at least 1, where 1 means no beam search

  • min_length : <class 'int'>, optional

    The min length of the sequence to be generated

  • max_length : <class 'int'>, optional

    The max length of the sequence to be generated; must be between min_length and infinity

  • early_stopping : <class 'bool'>, optional

    If set to `True`, beam search stops when at least `num_beams` finished sentences exist per batch

  • detail_level : <class 'fastcore.basics.DetailLevel'>, optional

    The level of detail to return

  • kwargs : <class 'inspect._empty'>

    Optional keyword arguments passed through to `PreTrainedModel.generate()`

Returns:

  • typing.List[str]

    A list of translated sentences
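
A hedged end-to-end sketch based on the signature above (exact return handling may differ between AdaptNLP versions):

```python
from adaptnlp import TransformersTranslator  # import path assumed

translator = TransformersTranslator.load("t5-small")

# Translate one sentence English -> German with greedy decoding (num_beams=1)
translations = translator.predict(
    text="Machine learning is changing how we build software.",
    t5_prefix="translate English to German",
    num_beams=1,
    max_length=128,
)
print(translations)
```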

class EasyTranslator[source]

EasyTranslator()

Translation Module

EasyTranslator.translate[source]

EasyTranslator.translate(text:Union[List[str], str], model_name_or_path:str='t5-small', t5_prefix:str='translate English to German', detail_level='low', mini_batch_size:int=32, num_beams:int=1, min_length:int=0, max_length:int=128, early_stopping:bool=True, **kwargs)

Predict method for running inference with the pre-trained translation model. Keyword arguments accepted by `PreTrainedModel.generate()` can be passed as well.

Parameters:

  • text : typing.Union[typing.List[str], str]

    Sentences to run inference on

  • model_name_or_path : <class 'str'>, optional

    A model id or path to a pre-trained model repository or custom trained model directory

  • t5_prefix : <class 'str'>, optional

    The prefix prepended to the input text for the specified task. Only used for T5-type models

  • detail_level : <class 'str'>, optional

    The level of detail to return

  • mini_batch_size : <class 'int'>, optional

    Mini batch size

  • num_beams : <class 'int'>, optional

    Number of beams for beam search; must be at least 1, where 1 means no beam search

  • min_length : <class 'int'>, optional

    The min length of the sequence to be generated

  • max_length : <class 'int'>, optional

    The max length of the sequence to be generated; must be between min_length and infinity

  • early_stopping : <class 'bool'>, optional

    If set to `True`, beam search stops when at least `num_beams` finished sentences exist per batch

  • kwargs : <class 'inspect._empty'>

    Optional keyword arguments passed through to `PreTrainedModel.generate()`

Returns:

  • typing.List[str]

    A list of translated sentences
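
For reference, a short usage sketch of the high-level API based on the signature above (output handling may vary slightly between AdaptNLP versions):

```python
from adaptnlp import EasyTranslator

translator = EasyTranslator()

# Translate a small batch of sentences with beam search
text = [
    "Machine learning will take over the world very soon.",
    "Machines can speak in many languages.",
]
translations = translator.translate(
    text=text,
    model_name_or_path="t5-small",
    t5_prefix="translate English to German",
    num_beams=4,
    max_length=64,
)
for t in translations:
    print(t)
```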