pytext.models.semantic_parsers.rnng package

Submodules

pytext.models.semantic_parsers.rnng.rnng_data_structures module

class pytext.models.semantic_parsers.rnng.rnng_data_structures.CompositionFunction[source]

Bases: torch.nn.modules.module.Module

class pytext.models.semantic_parsers.rnng.rnng_data_structures.CompositionalNN(lstm_dim)[source]

Bases: pytext.models.semantic_parsers.rnng.rnng_data_structures.CompositionFunction

forward(x)[source]

Embed the sequence. If the input corresponds to [IN:GL where am I at]:

  • x will contain the embeddings of [at I am where IN:GL] in that order.
  • The forward LSTM will embed the sequence [IN:GL where am I at].
  • The backward LSTM will embed the sequence [IN:GL at I am where].

The final hidden states are concatenated and then projected.

Parameters:x – Embeddings of the input tokens in reversed order
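
A minimal sketch of this bidirectional composition, assuming each element of x is a (1, 1, lstm_dim) tensor; the module layout and the Tanh projection are illustrative assumptions, not PyText's exact implementation:

    import torch
    import torch.nn as nn

    class BiLSTMComposition(nn.Module):
        def __init__(self, lstm_dim: int):
            super().__init__()
            self.fwd_lstm = nn.LSTM(lstm_dim, lstm_dim)
            self.bwd_lstm = nn.LSTM(lstm_dim, lstm_dim)
            # Project the two concatenated final hidden states (Tanh assumed)
            self.proj = nn.Sequential(nn.Linear(2 * lstm_dim, lstm_dim), nn.Tanh())

        def forward(self, x):
            # x: embeddings in reversed order, e.g. [at, I, am, where, IN:GL]
            nt = x[-1]                                             # non-terminal embedding (IN:GL)
            fwd_input = torch.cat([nt] + list(reversed(x[:-1])))   # IN:GL where am I at
            bwd_input = torch.cat([nt] + list(x[:-1]))             # IN:GL at I am where
            _, (fwd_h, _) = self.fwd_lstm(fwd_input)
            _, (bwd_h, _) = self.bwd_lstm(bwd_input)
            return self.proj(torch.cat([fwd_h[-1], bwd_h[-1]], dim=-1))

    comp = BiLSTMComposition(lstm_dim=64)
    x = [torch.randn(1, 1, 64) for _ in range(5)]  # [at, I, am, where, IN:GL]
    constituent = comp(x)                          # (1, 64) summary of the constituent
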
class pytext.models.semantic_parsers.rnng.rnng_data_structures.CompositionalSummationNN(lstm_dim)[source]

Bases: pytext.models.semantic_parsers.rnng.rnng_data_structures.CompositionFunction

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
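
Going by the class name, this composition sums the constituent embeddings before projecting them back to the LSTM dimension; the following is a hedged sketch under that assumption, not the actual implementation:

    import torch
    import torch.nn as nn

    class SummationComposition(nn.Module):
        def __init__(self, lstm_dim: int):
            super().__init__()
            # Project the summed embedding back to lstm_dim (Tanh is an assumption)
            self.proj = nn.Sequential(nn.Linear(lstm_dim, lstm_dim), nn.Tanh())

        def forward(self, x):
            # x: list of constituent embeddings, each of shape (1, 1, lstm_dim)
            summed = torch.sum(torch.cat(x, dim=0), dim=0, keepdim=True)
            return self.proj(summed)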

class pytext.models.semantic_parsers.rnng.rnng_data_structures.Element(node)[source]

Bases: object

class pytext.models.semantic_parsers.rnng.rnng_data_structures.ParserState(parser=None)[source]

Bases: object

copy()[source]
finished()[source]
class pytext.models.semantic_parsers.rnng.rnng_data_structures.StackLSTM(rnn, initial_state, p_empty_embedding)[source]

Bases: collections.abc.Sized

copy()[source]
ele_from_top(index: int) → pytext.models.semantic_parsers.rnng.rnng_data_structures.Element[source]
embedding()[source]
first_ele_match(funct)[source]
pop() → Tuple[Any, pytext.models.semantic_parsers.rnng.rnng_data_structures.Element][source]
push(expr, ele: pytext.models.semantic_parsers.rnng.rnng_data_structures.Element) → None[source]
top() → Tuple[Any, pytext.models.semantic_parsers.rnng.rnng_data_structures.Element][source]
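
The toy class below illustrates the stack-LSTM idea behind StackLSTM: each push advances an LSTM by one step and records the resulting state, so pop reverts to the previous state and embedding() always summarizes the current stack contents. The internals (nn.LSTMCell, the state layout) are assumptions for illustration, not PyText's code:

    import torch
    import torch.nn as nn

    class SimpleStackLSTM:
        def __init__(self, rnn: nn.LSTMCell, initial_state, p_empty_embedding):
            self.rnn = rnn
            self.p_empty_embedding = p_empty_embedding
            # Each entry is (lstm_state, stored_item); the base entry holds the initial state.
            self._stack = [(initial_state, None)]

        def push(self, expr, ele) -> None:
            prev_state = self._stack[-1][0]
            new_state = self.rnn(expr, prev_state)   # one LSTM step over the pushed embedding
            self._stack.append((new_state, (expr, ele)))

        def pop(self):
            _, item = self._stack.pop()              # previous LSTM state becomes current
            return item

        def top(self):
            return self._stack[-1][1]

        def embedding(self):
            # Summary of everything currently on the stack
            if len(self._stack) == 1:
                return self.p_empty_embedding
            return self._stack[-1][0][0]

        def __len__(self) -> int:
            return len(self._stack) - 1

    dim = 32
    stack = SimpleStackLSTM(
        nn.LSTMCell(dim, dim),
        (torch.zeros(1, dim), torch.zeros(1, dim)),
        torch.zeros(1, dim),
    )
    stack.push(torch.randn(1, dim), ele="where")
    summary = stack.embedding()   # reflects the single pushed element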

pytext.models.semantic_parsers.rnng.rnng_parser module

class pytext.models.semantic_parsers.rnng.rnng_parser.RNNGParser(ablation: pytext.models.semantic_parsers.rnng.rnng_parser.RNNGParser.Config.AblationParams, constraints: pytext.models.semantic_parsers.rnng.rnng_parser.RNNGParser.Config.RNNGConstraints, lstm_num_layers: int, lstm_dim: int, max_open_NT: int, dropout: float, actions_vocab, shift_idx: int, reduce_idx: int, ignore_subNTs_roots: List[int], valid_NT_idxs: List[int], valid_IN_idxs: List[int], valid_SL_idxs: List[int], embedding: pytext.models.embeddings.embedding_list.EmbeddingList, p_compositional: pytext.models.semantic_parsers.rnng.rnng_data_structures.CompositionFunction)[source]

Bases: pytext.models.model.Model, pytext.config.component.Component

The Recurrent Neural Network Grammar (RNNG) parser from Dyer et al.: https://arxiv.org/abs/1602.07776 and Gupta et al.: https://arxiv.org/abs/1810.07942. RNNG is a neural constituency parsing algorithm that explicitly models the compositional structure of a sentence. It learns the hierarchical relationships among the words and phrases in a sentence, thereby learning the underlying tree structure. The papers propose both generative and discriminative approaches; in PyText we have implemented the discriminative approach for modeling intent-slot parsing. It is a top-down shift-reduce parser that can output trees with non-terminals (intent and slot labels) and terminals (tokens).
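
As an illustration of how such a top-down action sequence yields a tree, the toy function below replays SHIFT, REDUCE, and open-non-terminal actions over the example utterance used elsewhere in this module; it is not part of PyText:

    def apply_actions(tokens, actions):
        stack, buffer = [], list(tokens)
        for action in actions:
            if action == "SHIFT":
                stack.append(buffer.pop(0))            # move the next token onto the stack
            elif action == "REDUCE":
                children = []
                while not str(stack[-1]).startswith(("IN:", "SL:")):
                    children.append(stack.pop())
                nt = stack.pop()                       # the open non-terminal being closed
                stack.append([nt] + list(reversed(children)))
            else:
                stack.append(action)                   # open a non-terminal (intent/slot label)
        return stack[0]

    tree = apply_actions(
        ["where", "am", "I", "at"],
        ["IN:GL", "SHIFT", "SHIFT", "SHIFT", "SHIFT", "REDUCE"],
    )
    # tree == ['IN:GL', 'where', 'am', 'I', 'at']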

Config[source]

alias of RNNGParser.Config

contextualize(context)[source]

Add additional context into the model. context can be anything that helps maintain or update state. For example, it is used by DisjointMultitaskModel for changing the task that should be trained with a given iterator.

forward(tokens: torch.Tensor, seq_lens: torch.Tensor, dict_feat: Optional[Tuple[torch.Tensor, ...]] = None, actions: Optional[List[List[int]]] = None, beam_size: int = 1, topk: int = 1)[source]

RNNG forward function.

Parameters:
  • tokens (torch.Tensor) – list of tokens
  • seq_lens (torch.Tensor) – list of sequence lengths
  • dict_feat (Optional[Tuple[torch.Tensor, ...]]) – dictionary or gazetteer features for each token
  • actions (Optional[List[List[int]]]) – Used only during training. Oracle actions for the instances.
  • beam_size (int) – Beam size; used only during inference
  • topk (int) – Number of top results from the method. If beam_size is 1, this is 1.
Returns:
  • if topk == 1: a tuple of the list of predicted actions and the list of corresponding scores
  • otherwise: a list of such (predicted actions, scores) tuples
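
A hypothetical call sketch, assuming parser is an RNNGParser built via from_config and that token indices and lengths come from the data handler; the shapes, index values, and oracle action sequence shown are illustrative only:

    import torch

    tokens = torch.tensor([[12, 7, 3, 41]])   # (batch=1, seq_len=4) token indices
    seq_lens = torch.tensor([4])

    # Training: supply oracle actions so the parser is driven along the gold sequence.
    # logits = parser(tokens, seq_lens, actions=[[5, 0, 0, 0, 0, 1]])

    # Inference: omit actions; optionally widen the beam and request the top-k parses.
    # actions_and_scores = parser(tokens, seq_lens, beam_size=4, topk=2)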

classmethod from_config(model_config, feature_config, metadata: pytext.data.data_handler.CommonMetadata)[source]
get_loss(logits: torch.Tensor, target_actions: torch.Tensor, context: torch.Tensor)[source]
get_param_groups_for_optimizer()[source]

This is called by code that looks for an instance of pytext.models.model.Model.

get_pred(logits: Tuple[torch.Tensor, torch.Tensor], *args)[source]
init_lstm()[source]
save_modules(*args, **kwargs)[source]

Save each sub-module in a separate file so it can be reused later.

valid_actions(state: pytext.models.semantic_parsers.rnng.rnng_data_structures.ParserState) → List[int][source]

Used for restricting the set of possible action predictions

Parameters:state (ParserState) – The state of the parser's stack, buffer, and actions
Returns:indices of the valid actions
Return type:List[int]
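
A hedged sketch of how such a list of valid action indices is typically applied: the scores of invalid actions are masked out so that only valid actions receive probability mass. This shows the general technique; PyText's internal use may differ:

    import torch

    def masked_action_log_probs(action_scores: torch.Tensor, valid_actions: list) -> torch.Tensor:
        # action_scores: (num_actions,) raw scores over the full action vocabulary
        mask = torch.full_like(action_scores, float("-inf"))
        mask[valid_actions] = 0.0
        return torch.log_softmax(action_scores + mask, dim=-1)

    scores = torch.randn(6)
    log_probs = masked_action_log_probs(scores, valid_actions=[0, 2, 5])
    # Invalid actions get -inf log-probability and are never predicted.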

Module contents