pytext package

Subpackages

Submodules

pytext.builtin_task module

pytext.builtin_task.register_builtin_tasks()[source]

pytext.main module

class pytext.main.Attrs[source]

Bases: object

pytext.main.gen_config_impl(task_name, options)[source]
pytext.main.run_single(rank: int, config_json: str, world_size: int, dist_init_method: str, summary_writer: tensorboardX.writer.SummaryWriter, metadata: pytext.data.data_handler.CommonMetadata)[source]
pytext.main.train_model_distributed(config, summary_writer)[source]

pytext.workflow module

pytext.workflow.batch_predict(model_file: str, examples: List[Dict[str, Any]])[source]
pytext.workflow.export_saved_model_to_caffe2(saved_model_path: str, export_caffe2_path: str, output_onnx_path: str = None) → None[source]
pytext.workflow.prepare_task(config: pytext.config.pytext_config.PyTextConfig, dist_init_url: str = None, device_id: int = 0, rank: int = 0, world_size: int = 1, summary_writer: Optional[tensorboardX.writer.SummaryWriter] = None, metadata: pytext.data.data_handler.CommonMetadata = None) → pytext.task.task.Task[source]
pytext.workflow.prepare_task_metadata(config: pytext.config.pytext_config.PyTextConfig) → pytext.data.data_handler.CommonMetadata[source]

Loading the whole dataset into CPU memory in every single process can cause OOMs in data-parallel distributed training. To avoid this, we move the operations that require loading the whole dataset out of the spawned worker processes and pass the resulting context to each process instead.
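The pattern above can be sketched in plain Python (this is a simplified stand-in, not PyText's actual code; the metadata contents and worker function are hypothetical, and real workers would be spawned processes rather than function calls):

```python
# Build dataset-derived metadata ONCE in the parent, then hand it to each
# worker, instead of having every process re-read the whole dataset.
load_count = 0

def build_metadata():
    """Stand-in for the expensive dataset pass (e.g. vocab building)."""
    global load_count
    load_count += 1
    return {"vocab_size": 10000, "labels": ["pos", "neg"]}  # hypothetical fields

def worker(rank, metadata):
    """Stand-in for one spawned training process; it only consumes metadata."""
    return (rank, metadata["vocab_size"])

metadata = build_metadata()  # executed once, before any worker starts
results = [worker(rank, metadata) for rank in range(4)]
```

Because the dataset pass happens exactly once, peak memory no longer scales with the number of processes.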

pytext.workflow.save_and_export(config: pytext.config.pytext_config.PyTextConfig, task: pytext.task.task.Task, summary_writer: Optional[tensorboardX.writer.SummaryWriter] = None) → None[source]
pytext.workflow.test_model(test_config: pytext.config.pytext_config.TestConfig, summary_writer: Optional[tensorboardX.writer.SummaryWriter] = None) → Any[source]
pytext.workflow.test_model_from_snapshot_path(snapshot_path: str, use_cuda_if_available: bool, test_path: Optional[str] = None, summary_writer: Optional[tensorboardX.writer.SummaryWriter] = None)[source]
pytext.workflow.train_model()[source]

Module contents

pytext.create_predictor(config: pytext.config.pytext_config.PyTextConfig, model_file: Optional[str] = None) → Callable[Mapping[str, str], Mapping[str, numpy.core.multiarray.array]][source]

Create a simple prediction API from a training config and an exported Caffe2 model file. The model file should be created by calling export on a trained model snapshot.
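The returned callable maps named text inputs to named score arrays. The stub below only illustrates that call shape (it is not a real predictor: the output names and zero scores are hypothetical placeholders, and an actual predictor would run the exported Caffe2 net):

```python
import numpy as np
from typing import Callable, Mapping

# Shape of the callable returned by pytext.create_predictor.
Predictor = Callable[[Mapping[str, str]], Mapping[str, np.ndarray]]

def make_stub_predictor() -> Predictor:
    # Hypothetical output blob names; a real exported model defines its own.
    output_names = ["doc_scores:pos", "doc_scores:neg"]

    def predict(inputs: Mapping[str, str]) -> Mapping[str, np.ndarray]:
        # A real predictor feeds `inputs` to the Caffe2 workspace and reads
        # back score blobs; this stub just returns zero arrays.
        return {name: np.zeros(1) for name in output_names}

    return predict

predictor = make_stub_predictor()
out = predictor({"text": "hello world"})
```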

pytext.load_config(filename: str) → pytext.config.pytext_config.PyTextConfig[source]

Load a PyText configuration file from a file path. See pytext.config.pytext_config for more info on configs.
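PyText configs are JSON files parsed into a PyTextConfig object. The snippet below only sketches the round trip of writing and reading such a file with the standard library; the keys shown are hypothetical placeholders, not a guaranteed-valid PyText schema (see pytext.config.pytext_config for the real one):

```python
import json
import os
import tempfile

# Hypothetical config contents for illustration only.
config_dict = {"version": 1, "task": {}}

# Write the JSON config to a temp file, as you would before calling
# pytext.load_config(path) on it.
path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(path, "w") as f:
    json.dump(config_dict, f)

# load_config would parse this JSON into a PyTextConfig; here we just
# confirm the file round-trips as plain JSON.
with open(path) as f:
    loaded = json.load(f)
```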