dgs.utils.torchtools.resume_from_checkpoint
- dgs.utils.torchtools.resume_from_checkpoint(fpath: str, model: TorchMod | BaseMod, optimizer: torch.optim.Optimizer | None = None, scheduler: torch.optim.lr_scheduler.LRScheduler | None = None, verbose: bool = False) → int
Resumes training from a checkpoint.
This will load (1) the model weights and (2) the state_dict of the optimizer, if optimizer is not None.
- Parameters:
fpath – The path to the checkpoint. Can be a relative or absolute path.
model – The model that is currently being trained.
optimizer – The optimizer whose state_dict should be restored, if given.
scheduler – The learning-rate scheduler whose state should be restored, if given.
verbose – Whether to print additional debug information.
- Returns:
start_epoch, the epoch at which training should resume.
- Return type:
int
Examples
>>> from dgs.utils.torchtools import resume_from_checkpoint
>>> fpath = 'log/my_model/model.pth.tar-10'
>>> start_epoch = resume_from_checkpoint(
>>>     fpath, model, optimizer, scheduler
>>> )
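A fuller resume sketch is shown below for context. The model, optimizer, scheduler, checkpoint path, and the train_one_epoch helper are illustrative assumptions, not part of dgs; only resume_from_checkpoint itself comes from this module.
>>> import torch
>>> from torch import nn
>>> from dgs.utils.torchtools import resume_from_checkpoint
>>> # Placeholder setup for illustration only.
>>> model = nn.Linear(10, 2)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
>>> scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5)
>>> # Restore the model weights and, because they are given, the
>>> # optimizer and scheduler state as well.
>>> start_epoch = resume_from_checkpoint(
>>>     'log/my_model/model.pth.tar-10', model, optimizer, scheduler
>>> )
>>> # Continue training from the restored epoch.
>>> for epoch in range(start_epoch, 20):
>>>     train_one_epoch(model, optimizer)  # hypothetical training step
>>>     scheduler.step()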