pytext.optimizer package

Submodules

pytext.optimizer.scheduler module

class pytext.optimizer.scheduler.LmFineTuning(optimizer, cut_frac=0.1, ratio=32, non_pretrained_param_groups=2, lm_lr_multiplier=1.0, lm_use_per_layer_lr=False, lm_gradual_unfreezing=True, last_epoch=-1)[source]

Bases: torch.optim.lr_scheduler._LRScheduler

Fine-tuning methods from the paper “Universal Language Model Fine-tuning for Text Classification” (arXiv:1801.06146).

Specifically, modifies the training schedule using slanted triangular learning rates, discriminative fine-tuning (per-layer learning rates), and gradual unfreezing.

Parameters:
  • optimizer (Optimizer) – Wrapped optimizer.
  • cut_frac (float) – The fraction of iterations over which the learning rate is increased. Default: 0.1.
  • ratio (int) – How much smaller the lowest LR is than the maximum LR eta_max. Default: 32.
  • non_pretrained_param_groups (int) – Number of param_groups, counted from the end, that were not pretrained. The default value is 2, since the base Model class typically supplies the optimizer with one param_group from the embedding and one param_group from its other components.
  • lm_lr_multiplier (float) – Factor to multiply lr for all pretrained layers by.
  • lm_use_per_layer_lr (bool) – Whether to make each pretrained layer’s lr one-half as large as the next (higher) layer.
  • lm_gradual_unfreezing (bool) – Whether to unfreeze layers one by one (per epoch).
  • last_epoch (int) – Despite the name, this is the index of the last batch update: last_batch_update = current_epoch_number * num_batches_per_epoch + batch_id. It is incremented by 1 after each batch update.
get_lr()[source]
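
A minimal usage sketch (hypothetical, not from the PyText sources), assuming a plain torch optimizer whose param_groups list the pretrained LM layers first, lowest layer first, followed by the two default non-pretrained groups:

    import torch
    from pytext.optimizer.scheduler import LmFineTuning

    # Stand-in modules: three pretrained LM layers plus a non-pretrained
    # embedding and classifier head (the default 2 non-pretrained groups).
    lm_layers = [torch.nn.Linear(128, 128) for _ in range(3)]
    embedding = torch.nn.Embedding(1000, 128)
    head = torch.nn.Linear(128, 4)

    param_groups = (
        [{"params": layer.parameters()} for layer in lm_layers]
        + [{"params": embedding.parameters()}, {"params": head.parameters()}]
    )
    optimizer = torch.optim.Adam(param_groups, lr=1e-3)

    scheduler = LmFineTuning(
        optimizer,
        cut_frac=0.1,                  # increase the LR over the first 10% of batch updates
        ratio=32,                      # lowest LR is eta_max / 32
        non_pretrained_param_groups=2,
        lm_use_per_layer_lr=True,      # discriminative fine-tuning
        lm_gradual_unfreezing=True,    # unfreeze one pretrained layer per epoch
    )

    # last_epoch counts batch updates, so step the scheduler once per batch.
    for _ in range(1000):
        optimizer.step()
        scheduler.step()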
class pytext.optimizer.scheduler.Scheduler(optimizers: List[torch.optim.optimizer.Optimizer], scheduler_params: pytext.optimizer.scheduler.SchedulerParams, lower_is_better: bool = False)[source]

Bases: pytext.config.component.Component

Wrapper for all schedulers.

Wraps one of PyTorch’s epoch-based learning rate schedulers or the metric-based ReduceLROnPlateau. The trainer will need to call the step() method at the end of every epoch, passing the epoch number and validation metrics. Note this differs slightly from PyTorch, where some schedulers need to be stepped at the beginning of each epoch.

Config

alias of SchedulerParams

step(metrics: float, epoch: Optional[int] = None) → None[source]
step_batch() → None[source]
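
A sketch of how a trainer might drive the wrapper, assuming scheduler is an already constructed Scheduler and that num_epochs, train_batches, evaluate, and model come from the surrounding training code:

    # `scheduler` wraps the model's optimizers; the names below
    # (num_epochs, train_batches, evaluate, model) are placeholders.
    for epoch in range(num_epochs):
        for batch in train_batches:
            # ... forward pass, loss.backward(), optimizer step ...
            scheduler.step_batch()          # for schedulers stepped per batch
        eval_metric = evaluate(model)       # e.g. validation loss or accuracy
        scheduler.step(eval_metric, epoch)  # epoch-based / metric-based schedulers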
pytext.optimizer.scheduler.SchedulerParams[source]

alias of pytext.optimizer.scheduler.SchedulerParams

class pytext.optimizer.scheduler.SchedulerType[source]

Bases: enum.Enum

An enumeration.

COSINE_ANNEALING_LR = 'cosine_annealing_lr'
EXPONENTIAL_LR = 'exponential_lr'
LM_FINE_TUNING_LR = 'lm_fine_tuning_lr'
NONE = 'none'
REDUCE_LR_ON_PLATEAU = 'reduce_lr_on_plateau'
STEP_LR = 'step_lr'
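
Each member’s value is a plain string, presumably the identifier a PyText config would use to select a scheduler; a small illustrative check (not from the docs):

    from pytext.optimizer.scheduler import SchedulerType

    assert SchedulerType.REDUCE_LR_ON_PLATEAU.value == "reduce_lr_on_plateau"
    assert SchedulerType("step_lr") is SchedulerType.STEP_LR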

Module contents

pytext.optimizer.create_optimizer(model: pytext.models.model.Model, optimizer_params: pytext.config.pytext_config.OptimizerParams) → List[torch.optim.optimizer.Optimizer][source]
pytext.optimizer.learning_rates(optimizers)[source]
pytext.optimizer.optimizer_step(optimizers: List[torch.optim.optimizer.Optimizer]) → None[source]
pytext.optimizer.optimizer_zero_grad(optimizers: List[torch.optim.optimizer.Optimizer]) → None[source]
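
A hypothetical sketch tying the module-level helpers together; model, optimizer_params, batch, and compute_loss are assumed to come from the surrounding PyText setup and are not defined here:

    from pytext.optimizer import (
        create_optimizer,
        learning_rates,
        optimizer_step,
        optimizer_zero_grad,
    )

    optimizers = create_optimizer(model, optimizer_params)  # List[Optimizer]

    optimizer_zero_grad(optimizers)      # zero gradients on every optimizer
    loss = compute_loss(model, batch)    # placeholder for the real loss computation
    loss.backward()
    optimizer_step(optimizers)           # step every optimizer

    for lr in learning_rates(optimizers):
        print(lr)                        # current learning rate of each param_group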