pytext.models.output_layers package

Submodules

pytext.models.output_layers.doc_classification_output_layer module

class pytext.models.output_layers.doc_classification_output_layer.ClassificationOutputLayer(target_names: Optional[List[str]] = None, loss_fn: Optional[pytext.loss.loss.Loss] = None, *args, **kwargs)[source]

Bases: pytext.models.output_layers.output_layer_base.OutputLayerBase

Output layer for document classification models. It supports CrossEntropyLoss and BinaryCrossEntropyLoss per document.

Parameters:loss_fn (Union[CrossEntropyLoss, BinaryCrossEntropyLoss]) – The loss function to use for computing loss. Defaults to None.
loss_fn

The loss function to use for computing loss.

Config[source]

alias of ClassificationOutputLayer.Config

export_to_caffe2(workspace: caffe2.python.workspace, init_net: caffe2.python.core.Net, predict_net: caffe2.python.core.Net, model_out: torch.Tensor, output_name: str) → List[caffe2.python.core.BlobReference][source]

Exports the doc classification layer to Caffe2. See OutputLayerBase.export_to_caffe2() for details.

classmethod from_config(config: pytext.models.output_layers.doc_classification_output_layer.ClassificationOutputLayer.Config, metadata: pytext.fields.field.FieldMeta)[source]
get_pred(logit, *args, **kwargs)[source]

Compute and return prediction and scores from the model.

Prediction is computed using argmax over the document label/target space.

Scores are sigmoid or softmax scores over the model logits depending on the loss component being used.

Parameters:logit (torch.Tensor) – Logits returned by DocModel.
Returns:Model prediction and scores.
Return type:Tuple[torch.Tensor, torch.Tensor]
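
The prediction logic above can be sketched for a single document's logits using only the standard library (the actual layer operates on batched torch.Tensor logits and uses sigmoid scores when BinaryCrossEntropyLoss is configured; the softmax choice and the helper name get_doc_pred here are illustrative assumptions, not PyText's code):

```python
import math

def softmax(logits):
    # Numerically stable softmax over one document's label logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def get_doc_pred(logits):
    # Prediction: argmax over the document label/target space.
    # Scores: softmax over the same logits.
    pred = max(range(len(logits)), key=lambda i: logits[i])
    return pred, softmax(logits)
```

For example, get_doc_pred([1.0, 3.0, 2.0]) predicts label index 1, with the highest softmax score on that label.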

pytext.models.output_layers.intent_slot_output_layer module

class pytext.models.output_layers.intent_slot_output_layer.IntentSlotOutputLayer(doc_output: pytext.models.output_layers.doc_classification_output_layer.ClassificationOutputLayer, word_output: pytext.models.output_layers.word_tagging_output_layer.WordTaggingOutputLayer)[source]

Bases: pytext.models.output_layers.output_layer_base.OutputLayerBase

Output layer for joint intent classification and slot-filling models. Intent classification is a document classification problem and slot filling is a word tagging problem, so these pairs of terms are used interchangeably in this documentation.

Parameters:
  • doc_output (ClassificationOutputLayer) – Output layer for intent classification task. See ClassificationOutputLayer for details.
  • word_output (WordTaggingOutputLayer) – Output layer for slot filling task. See WordTaggingOutputLayer for details.
doc_output

Output layer for the intent classification task.

word_output

Output layer for the slot filling task.

Config[source]

alias of IntentSlotOutputLayer.Config

export_to_caffe2(workspace: caffe2.python.workspace, init_net: caffe2.python.core.Net, predict_net: caffe2.python.core.Net, model_out: List[torch.Tensor], doc_out_name: str, word_out_name: str) → List[caffe2.python.core.BlobReference][source]

Exports the intent slot output layer to Caffe2. See OutputLayerBase.export_to_caffe2() for details.

classmethod from_config(config: pytext.models.output_layers.intent_slot_output_layer.IntentSlotOutputLayer.Config, doc_meta: pytext.fields.field.FieldMeta, word_meta: pytext.fields.field.FieldMeta)[source]
get_loss(logits: Tuple[torch.Tensor, torch.Tensor], targets: Tuple[torch.Tensor, torch.Tensor], context: Dict[str, Any] = None, *args, **kwargs) → torch.Tensor[source]

Compute and return the averaged intent and slot-filling loss.

Parameters:
  • logits (Tuple[torch.Tensor, torch.Tensor]) – Logits returned by JointModel; a tuple containing logits for intent classification and slot filling.
  • targets (Tuple[torch.Tensor, torch.Tensor]) – Tuple of target Tensors containing true document label/target and true word labels/targets.
  • context (Dict[str, Any]) – Context is a dictionary of items that’s passed as additional metadata by the JointModelDataHandler. Defaults to None.
Returns:

Averaged intent and slot loss.

Return type:

torch.Tensor
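
The averaging described above can be sketched as follows; combine_intent_slot_loss is a hypothetical helper (the real layer computes each term through its doc_output and word_output sub-layers, and the equal weighting is an assumption here):

```python
def combine_intent_slot_loss(intent_loss, slot_loss):
    # Average the document-level (intent) and word-level (slot) losses
    # into a single scalar training objective.
    return (intent_loss + slot_loss) / 2.0
```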

get_pred(logits: Tuple[torch.Tensor, torch.Tensor], targets: Optional[torch.Tensor] = None, context: Optional[Dict[str, Any]] = None) → Tuple[torch.Tensor, torch.Tensor][source]

Compute and return prediction and scores from the model.

Parameters:
  • logits (Tuple[torch.Tensor, torch.Tensor]) – Logits returned by JointModel; a tuple containing logits for intent classification and slot filling.
  • targets (Optional[torch.Tensor]) – Not applicable. Defaults to None.
  • context (Optional[Dict[str, Any]]) – Context is a dictionary of items that’s passed as additional metadata by the JointModelDataHandler. Defaults to None.
Returns:

Model prediction and scores.

Return type:

Tuple[torch.Tensor, torch.Tensor]

pytext.models.output_layers.lm_output_layer module

class pytext.models.output_layers.lm_output_layer.LMOutputLayer(target_names: List[str], loss_fn: pytext.loss.loss.Loss = None, config=None, pad_token_idx=-100)[source]

Bases: pytext.models.output_layers.output_layer_base.OutputLayerBase

Output layer for language models. It supports CrossEntropyLoss per word.

Parameters:loss_fn (CrossEntropyLoss) – Cross-entropy loss component. Defaults to None.
loss_fn

Cross-entropy loss component for computing loss.

Config[source]

alias of LMOutputLayer.Config

static calculate_perplexity(sequence_loss: torch.Tensor) → torch.Tensor[source]
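
calculate_perplexity presumably follows the standard definition: perplexity is the exponential of the per-word cross-entropy loss. A stdlib sketch, assuming sequence_loss is already averaged over words (the actual method operates on a torch.Tensor):

```python
import math

def calculate_perplexity(sequence_loss):
    # Perplexity = exp(average per-word cross-entropy loss).
    return math.exp(sequence_loss)
```

A loss of 0 gives perplexity 1 (a perfect model), while a loss of ln(V) gives perplexity V (uniform guessing over V words).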
classmethod from_config(config: pytext.models.output_layers.lm_output_layer.LMOutputLayer.Config, metadata: pytext.fields.field.FieldMeta)[source]
get_loss(logit: torch.Tensor, target: torch.Tensor, context: Dict[str, Any], reduce=True) → torch.Tensor[source]

Compute word prediction loss by comparing prediction of each word in the sentence with the true word.

Parameters:
  • logit (torch.Tensor) – Logit returned by LMLSTM.
  • target (torch.Tensor) – True words to compare each prediction against.
  • context (Dict[str, Any]) – Not applicable. Defaults to None.
  • reduce (bool) – Whether to reduce loss over the batch. Defaults to True.
Returns:

Word prediction loss.

Return type:

torch.Tensor

get_pred(logit: torch.Tensor, target: torch.Tensor, context: Dict[str, Any]) → Tuple[torch.Tensor, torch.Tensor][source]

Compute and return prediction and scores from the model. Prediction is computed using argmax over the word label/target space. Scores are softmax scores over the model logits.

Parameters:logit (torch.Tensor) – Logits returned by LMLSTM.
Returns:

Model prediction and scores.

Return type:

Tuple[torch.Tensor, torch.Tensor]

pytext.models.output_layers.output_layer_base module

class pytext.models.output_layers.output_layer_base.OutputLayerBase(target_names: Optional[List[str]] = None, loss_fn: Optional[pytext.loss.loss.Loss] = None, *args, **kwargs)[source]

Bases: pytext.models.module.Module

Base class for all output layers in PyText. The responsibilities of this layer are

  1. Implement how loss is computed from logits and targets.
  2. Implement how to get predictions from logits.
  3. Implement the Caffe2 operator for performing the above tasks. This is
    used when PyText exports PyTorch model to Caffe2.
Parameters:loss_fn (Loss) – The loss function object to use for computing loss. Defaults to None.
loss_fn

The loss function object to use for computing loss.

Config

alias of pytext.config.component.ComponentMeta.__new__.<locals>.Config

export_to_caffe2(workspace: caffe2.python.workspace, init_net: caffe2.python.core.Net, predict_net: caffe2.python.core.Net, model_out: torch.Tensor, output_name: str) → List[caffe2.python.core.BlobReference][source]

Exports the output layer to Caffe2 by manually adding the necessary operators to the init_net and predict_net, and returns the list of external output blobs to be added to the model. By default this does nothing, so any subclass that supports export must override this method.

To learn about Caffe2 computation graphs and why we need two networks, init_net and predict_net/exec_net, read https://caffe2.ai/docs/intro-tutorial#null__nets-and-operators.

Parameters:
  • workspace (core.workspace) – Caffe2 workspace to use for adding the operator. See https://caffe2.ai/docs/workspace.html to learn about Caffe2 workspace.
  • init_net (core.Net) – Caffe2 init_net to add the operator to.
  • predict_net (core.Net) – Caffe2 predict_net to add the operator to.
  • model_out (torch.Tensor) – Output logit Tensor from the model.
  • output_name (str) – Name of model_out to use in Caffe2 net.
  • label_names (List[str]) – List of names of the targets/labels to expose from the Caffe2 net.
Returns:

List of output blobs that the output layer generates.

Return type:

List[core.BlobReference]

classmethod from_config(config, metadata: pytext.fields.field.FieldMeta)[source]
get_loss(logit: torch.Tensor, target: torch.Tensor, context: Optional[Dict[str, Any]] = None, reduce: bool = True) → torch.Tensor[source]

Compute and return the loss given logits and targets.

Parameters:
  • logit (torch.Tensor) – Logits returned by the model.
  • target (torch.Tensor) – True label/target to compute loss against.
  • context (Optional[Dict[str, Any]]) – Context is a dictionary of items that’s passed as additional metadata by the DataHandler. Defaults to None.
  • reduce (bool) – Whether to reduce loss over the batch. Defaults to True.
Returns:

Model loss.

Return type:

torch.Tensor

get_pred(logit: torch.Tensor, targets: Optional[torch.Tensor] = None, context: Optional[Dict[str, Any]] = None) → Tuple[torch.Tensor, torch.Tensor][source]

Compute and return prediction and scores from the model.

Parameters:
  • logit (torch.Tensor) – Logits returned by the model.
  • targets (Optional[torch.Tensor]) – True label/target. Only used by LMOutputLayer. Defaults to None.
  • context (Optional[Dict[str, Any]]) – Context is a dictionary of items that’s passed as additional metadata by the DataHandler. Defaults to None.
Returns:

Model prediction and scores.

Return type:

Tuple[torch.Tensor, torch.Tensor]

pytext.models.output_layers.utils module

class pytext.models.output_layers.utils.OutputLayerUtils[source]

Bases: object

static gen_additional_blobs(predict_net: caffe2.python.core.Net, probability_out, model_out: torch.Tensor, output_name: str, label_names: List[str]) → List[caffe2.python.core.BlobReference][source]

Utility method that generates additional blobs exposing human-readable results for models that use explicit labels.

pytext.models.output_layers.word_tagging_output_layer module

class pytext.models.output_layers.word_tagging_output_layer.CRFOutputLayer(num_tags, *args)[source]

Bases: pytext.models.output_layers.output_layer_base.OutputLayerBase

Output layer for word tagging models that use Conditional Random Field.

Parameters:num_tags (int) – Total number of possible word tags.
num_tags

Total number of possible word tags.

Config

alias of pytext.config.component.ComponentMeta.__new__.<locals>.Config

export_to_caffe2(workspace: caffe2.python.workspace, init_net: caffe2.python.core.Net, predict_net: caffe2.python.core.Net, model_out: torch.Tensor, output_name: str) → List[caffe2.python.core.BlobReference][source]

Exports the CRF output layer to Caffe2. See OutputLayerBase.export_to_caffe2() for details.

classmethod from_config(config: pytext.config.component.ComponentMeta.__new__.<locals>.Config, metadata: pytext.fields.field.FieldMeta)[source]
get_loss(logit: torch.Tensor, target: torch.Tensor, context: Dict[str, Any], reduce=True)[source]

Compute word tagging loss by using CRF.

Parameters:
  • logit (torch.Tensor) – Logit returned by WordTaggingModel.
  • target (torch.Tensor) – True word labels/targets.
  • context (Dict[str, Any]) – Context is a dictionary of items that’s passed as additional metadata by the JointModelDataHandler. Defaults to None.
  • reduce (bool) – Whether to reduce loss over the batch. Defaults to True.
Returns:

Word tagging loss.

Return type:

torch.Tensor

get_pred(logit: torch.Tensor, target: Optional[torch.Tensor] = None, context: Optional[Dict[str, Any]] = None)[source]

Compute and return prediction and scores from the model.

Prediction is computed using CRF decoding.

Scores are softmax scores over the model logits, where the logits are rearranged so that the decoded word tag has the highest logit value. This is done because, with CRF decoding, the highest-valued word tag for a given word may not be part of the overall decoded tag sequence; rearranging the logit values lets argmax recover the decoded tags.
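
The rearrangement can be sketched per word: swap logit values so that the CRF-decoded tag ends up with the highest logit, after which an ordinary argmax recovers the decoded tag. Whether the real layer performs exactly this swap is an implementation detail; the helper below is an illustrative assumption:

```python
def rearrange_word_logits(word_logits, decoded_tag):
    # Swap values so the CRF-decoded tag holds the highest logit
    # for this word; argmax over the result yields decoded_tag.
    logits = list(word_logits)
    best = max(range(len(logits)), key=lambda i: logits[i])
    logits[decoded_tag], logits[best] = logits[best], logits[decoded_tag]
    return logits
```

The multiset of logit values is preserved, so softmax scores computed over the rearranged logits still sum to one.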

Parameters:
  • logit (torch.Tensor) – Logits returned by WordTaggingModel.
  • target (torch.Tensor) – Not applicable. Defaults to None.
  • context (Optional[Dict[str, Any]]) – Context is a dictionary of items that’s passed as additional metadata by the JointModelDataHandler. Defaults to None.
Returns:

Model prediction and scores.

Return type:

Tuple[torch.Tensor, torch.Tensor]

class pytext.models.output_layers.word_tagging_output_layer.WordTaggingOutputLayer(target_names: Optional[List[str]] = None, loss_fn: Optional[pytext.loss.loss.Loss] = None, *args, **kwargs)[source]

Bases: pytext.models.output_layers.output_layer_base.OutputLayerBase

Output layer for word tagging models. It supports CrossEntropyLoss per word.

Parameters:loss_fn (CrossEntropyLoss) – Cross-entropy loss component. Defaults to None.
loss_fn

Cross-entropy loss component.

Config[source]

alias of WordTaggingOutputLayer.Config

export_to_caffe2(workspace: caffe2.python.workspace, init_net: caffe2.python.core.Net, predict_net: caffe2.python.core.Net, model_out: torch.Tensor, output_name: str) → List[caffe2.python.core.BlobReference][source]

Exports the word tagging output layer to Caffe2.

classmethod from_config(config: pytext.models.output_layers.word_tagging_output_layer.WordTaggingOutputLayer.Config, metadata: pytext.fields.field.FieldMeta)[source]
get_loss(logit: torch.Tensor, target: torch.Tensor, context: Dict[str, Any], reduce: bool = True) → torch.Tensor[source]

Compute word tagging loss by comparing prediction of each word in the sentence with its true label/target.

Parameters:
  • logit (torch.Tensor) – Logit returned by WordTaggingModel.
  • target (torch.Tensor) – True word labels/targets.
  • context (Dict[str, Any]) – Context is a dictionary of items that’s passed as additional metadata by the JointModelDataHandler. Defaults to None.
  • reduce (bool) – Whether to reduce loss over the batch. Defaults to True.
Returns:

Word tagging loss for all words in the sentence.

Return type:

torch.Tensor
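
The per-word cross-entropy described above can be sketched with the stdlib for a single unbatched sentence (the real layer applies the configured CrossEntropyLoss to torch tensors; the numerically stable log-softmax form below is an assumption about the computation, not PyText's code):

```python
import math

def word_tagging_loss(word_logits, word_targets):
    # Average of -log softmax(logits)[target] over the sentence's words.
    total = 0.0
    for logits, target in zip(word_logits, word_targets):
        m = max(logits)
        log_z = m + math.log(sum(math.exp(x - m) for x in logits))
        total += log_z - logits[target]  # -log p(true tag | word)
    return total / len(word_targets)
```

With uniform logits over two tags, the loss per word is ln 2, the entropy of a fair coin.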

get_pred(logit: torch.Tensor, *args, **kwargs) → Tuple[torch.Tensor, torch.Tensor][source]

Compute and return prediction and scores from the model. Prediction is computed using argmax over the word label/target space. Scores are softmax scores over the model logits.

Parameters:logit (torch.Tensor) – Logits returned by WordTaggingModel.
Returns:Model prediction and scores.
Return type:Tuple[torch.Tensor, torch.Tensor]

Module contents

class pytext.models.output_layers.OutputLayerBase(target_names: Optional[List[str]] = None, loss_fn: Optional[pytext.loss.loss.Loss] = None, *args, **kwargs)[source]

Bases: pytext.models.module.Module

Base class for all output layers in PyText. The responsibilities of this layer are

  1. Implement how loss is computed from logits and targets.
  2. Implement how to get predictions from logits.
  3. Implement the Caffe2 operator for performing the above tasks. This is
    used when PyText exports PyTorch model to Caffe2.
Parameters:loss_fn (Loss) – The loss function object to use for computing loss. Defaults to None.
loss_fn

The loss function object to use for computing loss.

Config

alias of pytext.config.component.ComponentMeta.__new__.<locals>.Config

export_to_caffe2(workspace: caffe2.python.workspace, init_net: caffe2.python.core.Net, predict_net: caffe2.python.core.Net, model_out: torch.Tensor, output_name: str) → List[caffe2.python.core.BlobReference][source]

Exports the output layer to Caffe2 by manually adding the necessary operators to the init_net and predict_net, and returns the list of external output blobs to be added to the model. By default this does nothing, so any subclass that supports export must override this method.

To learn about Caffe2 computation graphs and why we need two networks, init_net and predict_net/exec_net, read https://caffe2.ai/docs/intro-tutorial#null__nets-and-operators.

Parameters:
  • workspace (core.workspace) – Caffe2 workspace to use for adding the operator. See https://caffe2.ai/docs/workspace.html to learn about Caffe2 workspace.
  • init_net (core.Net) – Caffe2 init_net to add the operator to.
  • predict_net (core.Net) – Caffe2 predict_net to add the operator to.
  • model_out (torch.Tensor) – Output logit Tensor from the model.
  • output_name (str) – Name of model_out to use in Caffe2 net.
  • label_names (List[str]) – List of names of the targets/labels to expose from the Caffe2 net.
Returns:

List of output blobs that the output layer generates.

Return type:

List[core.BlobReference]

classmethod from_config(config, metadata: pytext.fields.field.FieldMeta)[source]
get_loss(logit: torch.Tensor, target: torch.Tensor, context: Optional[Dict[str, Any]] = None, reduce: bool = True) → torch.Tensor[source]

Compute and return the loss given logits and targets.

Parameters:
  • logit (torch.Tensor) – Logits returned by the model.
  • target (torch.Tensor) – True label/target to compute loss against.
  • context (Optional[Dict[str, Any]]) – Context is a dictionary of items that’s passed as additional metadata by the DataHandler. Defaults to None.
  • reduce (bool) – Whether to reduce loss over the batch. Defaults to True.
Returns:

Model loss.

Return type:

torch.Tensor

get_pred(logit: torch.Tensor, targets: Optional[torch.Tensor] = None, context: Optional[Dict[str, Any]] = None) → Tuple[torch.Tensor, torch.Tensor][source]

Compute and return prediction and scores from the model.

Parameters:
  • logit (torch.Tensor) – Logits returned by the model.
  • targets (Optional[torch.Tensor]) – True label/target. Only used by LMOutputLayer. Defaults to None.
  • context (Optional[Dict[str, Any]]) – Context is a dictionary of items that’s passed as additional metadata by the DataHandler. Defaults to None.
Returns:

Model prediction and scores.

Return type:

Tuple[torch.Tensor, torch.Tensor]

class pytext.models.output_layers.CRFOutputLayer(num_tags, *args)[source]

Bases: pytext.models.output_layers.output_layer_base.OutputLayerBase

Output layer for word tagging models that use Conditional Random Field.

Parameters:num_tags (int) – Total number of possible word tags.
num_tags

Total number of possible word tags.

Config

alias of pytext.config.component.ComponentMeta.__new__.<locals>.Config

export_to_caffe2(workspace: caffe2.python.workspace, init_net: caffe2.python.core.Net, predict_net: caffe2.python.core.Net, model_out: torch.Tensor, output_name: str) → List[caffe2.python.core.BlobReference][source]

Exports the CRF output layer to Caffe2. See OutputLayerBase.export_to_caffe2() for details.

classmethod from_config(config: pytext.config.component.ComponentMeta.__new__.<locals>.Config, metadata: pytext.fields.field.FieldMeta)[source]
get_loss(logit: torch.Tensor, target: torch.Tensor, context: Dict[str, Any], reduce=True)[source]

Compute word tagging loss by using CRF.

Parameters:
  • logit (torch.Tensor) – Logit returned by WordTaggingModel.
  • target (torch.Tensor) – True word labels/targets.
  • context (Dict[str, Any]) – Context is a dictionary of items that’s passed as additional metadata by the JointModelDataHandler. Defaults to None.
  • reduce (bool) – Whether to reduce loss over the batch. Defaults to True.
Returns:

Word tagging loss.

Return type:

torch.Tensor

get_pred(logit: torch.Tensor, target: Optional[torch.Tensor] = None, context: Optional[Dict[str, Any]] = None)[source]

Compute and return prediction and scores from the model.

Prediction is computed using CRF decoding.

Scores are softmax scores over the model logits, where the logits are rearranged so that the decoded word tag has the highest logit value. This is done because, with CRF decoding, the highest-valued word tag for a given word may not be part of the overall decoded tag sequence; rearranging the logit values lets argmax recover the decoded tags.

Parameters:
  • logit (torch.Tensor) – Logits returned by WordTaggingModel.
  • target (torch.Tensor) – Not applicable. Defaults to None.
  • context (Optional[Dict[str, Any]]) – Context is a dictionary of items that’s passed as additional metadata by the JointModelDataHandler. Defaults to None.
Returns:

Model prediction and scores.

Return type:

Tuple[torch.Tensor, torch.Tensor]

class pytext.models.output_layers.ClassificationOutputLayer(target_names: Optional[List[str]] = None, loss_fn: Optional[pytext.loss.loss.Loss] = None, *args, **kwargs)[source]

Bases: pytext.models.output_layers.output_layer_base.OutputLayerBase

Output layer for document classification models. It supports CrossEntropyLoss and BinaryCrossEntropyLoss per document.

Parameters:loss_fn (Union[CrossEntropyLoss, BinaryCrossEntropyLoss]) – The loss function to use for computing loss. Defaults to None.
loss_fn

The loss function to use for computing loss.

Config[source]

alias of ClassificationOutputLayer.Config

export_to_caffe2(workspace: caffe2.python.workspace, init_net: caffe2.python.core.Net, predict_net: caffe2.python.core.Net, model_out: torch.Tensor, output_name: str) → List[caffe2.python.core.BlobReference][source]

Exports the doc classification layer to Caffe2. See OutputLayerBase.export_to_caffe2() for details.

classmethod from_config(config: pytext.models.output_layers.doc_classification_output_layer.ClassificationOutputLayer.Config, metadata: pytext.fields.field.FieldMeta)[source]
get_pred(logit, *args, **kwargs)[source]

Compute and return prediction and scores from the model.

Prediction is computed using argmax over the document label/target space.

Scores are sigmoid or softmax scores over the model logits depending on the loss component being used.

Parameters:logit (torch.Tensor) – Logits returned by DocModel.
Returns:Model prediction and scores.
Return type:Tuple[torch.Tensor, torch.Tensor]

class pytext.models.output_layers.WordTaggingOutputLayer(target_names: Optional[List[str]] = None, loss_fn: Optional[pytext.loss.loss.Loss] = None, *args, **kwargs)[source]

Bases: pytext.models.output_layers.output_layer_base.OutputLayerBase

Output layer for word tagging models. It supports CrossEntropyLoss per word.

Parameters:loss_fn (CrossEntropyLoss) – Cross-entropy loss component. Defaults to None.
loss_fn

Cross-entropy loss component.

Config[source]

alias of WordTaggingOutputLayer.Config

export_to_caffe2(workspace: caffe2.python.workspace, init_net: caffe2.python.core.Net, predict_net: caffe2.python.core.Net, model_out: torch.Tensor, output_name: str) → List[caffe2.python.core.BlobReference][source]

Exports the word tagging output layer to Caffe2.

classmethod from_config(config: pytext.models.output_layers.word_tagging_output_layer.WordTaggingOutputLayer.Config, metadata: pytext.fields.field.FieldMeta)[source]
get_loss(logit: torch.Tensor, target: torch.Tensor, context: Dict[str, Any], reduce: bool = True) → torch.Tensor[source]

Compute word tagging loss by comparing prediction of each word in the sentence with its true label/target.

Parameters:
  • logit (torch.Tensor) – Logit returned by WordTaggingModel.
  • target (torch.Tensor) – True word labels/targets.
  • context (Dict[str, Any]) – Context is a dictionary of items that’s passed as additional metadata by the JointModelDataHandler. Defaults to None.
  • reduce (bool) – Whether to reduce loss over the batch. Defaults to True.
Returns:

Word tagging loss for all words in the sentence.

Return type:

torch.Tensor

get_pred(logit: torch.Tensor, *args, **kwargs) → Tuple[torch.Tensor, torch.Tensor][source]

Compute and return prediction and scores from the model. Prediction is computed using argmax over the word label/target space. Scores are softmax scores over the model logits.

Parameters:logit (torch.Tensor) – Logits returned by WordTaggingModel.
Returns:Model prediction and scores.
Return type:Tuple[torch.Tensor, torch.Tensor]

class pytext.models.output_layers.OutputLayerUtils[source]

Bases: object

static gen_additional_blobs(predict_net: caffe2.python.core.Net, probability_out, model_out: torch.Tensor, output_name: str, label_names: List[str]) → List[caffe2.python.core.BlobReference][source]

Utility method that generates additional blobs exposing human-readable results for models that use explicit labels.