ebes package
Subpackages
- ebes.data package
- Submodules
- ebes.data.accessors module
- ebes.data.batch_tfs module
- ebes.data.datasets module
- ebes.data.loading module
- ebes.data.utils module
- Module contents
- ebes.losses package
- ebes.metrics package
- ebes.model package
- ebes.pipeline package
- ebes.utils package
Submodules
ebes.trainer module
- class ebes.trainer.Trainer(*, model=None, loss=None, optimizer=None, lr_scheduler=None, train_loader=None, val_loader=None, metrics=None, run_name=None, total_iters=None, total_epochs=None, patience=-1, iters_per_epoch=10000, ckpt_dir=None, ckpt_replace=True, ckpt_track_metric='epoch', ckpt_resume=None, device='cpu', metrics_on_train=False, verbose=True)
Bases: object
A base class for all trainers.
- best_checkpoint()
Return the path to the best checkpoint.
- Return type: Path
- compute_metrics(phase)
Compute and log metrics.
The metrics are computed over the whole epoch's data, so their granularity is one epoch; hence, whenever the metrics are not None, the epoch is not None either.
- Parameters:
phase (Literal['train', 'val']) – whether the metrics were collected during train or validation.
- Return type: dict[str, Any]
- property device: str
- load_best_model()
Load the best model into self._model according to the tracked metric.
- Return type: None
- load_ckpt(ckpt_fname, strict=True)
Load model, optimizer and scheduler states.
- Parameters:
ckpt_fname (str | PathLike) – path to checkpoint.
- Return type: None
- property lr_scheduler: _LRScheduler | None
- property model: Module | None
- property optimizer: Optimizer | None
- run()
Train and validate model.
- Return type: None
- property run_name
- save_ckpt(ckpt_path=None)
Save model, optimizer and scheduler states.
- Parameters:
ckpt_path (str | PathLike | None) – path to checkpoints. If ckpt_path is a directory, the checkpoint will be saved there with the epoch, loss and metrics in the filename. All scalar metrics returned from compute_metrics are used to construct the filename. If a full path is specified, the checkpoint will be saved exactly there. If None, the ckpt_dir from the constructor is used, with a subfolder named after run_name from the Trainer's constructor.
- Return type: None
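The filename construction described for save_ckpt can be sketched roughly as follows. This is not the library's actual format — the `epoch_`/metric naming scheme and the `.ckpt` suffix are assumptions for illustration; only the "scalar metrics become part of the filename" behavior comes from the docstring above:

```python
def ckpt_filename(epoch, metrics):
    """Illustrative sketch: build a checkpoint filename from the epoch
    and the scalar entries of a compute_metrics-style dict.
    Non-scalar metrics (e.g. confusion matrices) are skipped."""
    scalars = {k: v for k, v in metrics.items() if isinstance(v, (int, float))}
    parts = [f"epoch_{epoch}"] + [f"{k}_{v:.4g}" for k, v in sorted(scalars.items())]
    # Append the suffix with plain string concatenation: metric values
    # contain dots, so Path.with_suffix would truncate them.
    return "-".join(parts) + ".ckpt"

print(ckpt_filename(3, {"loss": 0.1234, "acc": 0.9}))
# epoch_3-acc_0.9-loss_0.1234.ckpt
```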
- train(iters)
- Return type: dict[str, Any]
- validate(loader=None)
- Return type: dict[str, Any]
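The patience constructor argument controls early stopping: training halts after that many consecutive epochs without improvement in the tracked metric, and patience=-1 (the default) disables the check. A minimal sketch of that logic, assuming a higher-is-better metric — this is not the library's implementation, just the standard pattern:

```python
def early_stop_epochs(metric_history, patience):
    """Return the number of epochs actually run when training stops
    after `patience` consecutive epochs without improvement.
    patience < 0 disables early stopping (as with Trainer's default -1)."""
    best = float("-inf")
    since_best = 0
    for epoch, value in enumerate(metric_history, start=1):
        if value > best:
            best = value
            since_best = 0
        else:
            since_best += 1
            if patience >= 0 and since_best >= patience:
                return epoch  # stop: no improvement for `patience` epochs
    return len(metric_history)  # ran to completion

print(early_stop_epochs([0.1, 0.5, 0.4, 0.45, 0.44], patience=2))  # → 4
```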
ebes.types module
- class ebes.types.Batch(*, lengths, time, index=None, num_features=None, cat_features=None, target=None, cat_features_names=None, num_features_names=None, cat_mask=None, num_mask=None)
Bases: object
- cat_features: Tensor | None = None
- cat_features_names: list[str] | None = None
- cat_mask: Tensor | None = None
- index: Tensor | ndarray | None = None
- lengths: Tensor
- num_features: Tensor | None = None
- num_features_names: list[str] | None = None
- num_mask: Tensor | None = None
- pop_target()
- Return type: Tensor | None
- target: Tensor | None = None
- time: ndarray | Tensor
- to(device)
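A minimal pure-Python stand-in for Batch, illustrating the assumed semantics of pop_target() — returning the target while detaching it from the batch. Field types are simplified from Tensor/ndarray to plain lists so the sketch runs without torch:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class MiniBatch:
    # Illustrative stand-in for ebes.types.Batch; only a subset of fields.
    lengths: list
    time: list
    target: Optional[Any] = None

    def pop_target(self):
        # Assumed semantics of Batch.pop_target(): return the target
        # and clear it on the batch (hence "pop"); None if already absent.
        t = self.target
        self.target = None
        return t

b = MiniBatch(lengths=[3, 2], time=[0.0, 1.0], target=[1, 0])
y = b.pop_target()  # y == [1, 0]; b.target is now None
```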
- class ebes.types.NHReturn(pre_event_intensities_of_gt, non_event_intensity, clustering_loss, lengths, clus_labels, pred_labels)
Bases: object
- clus_labels: Tensor
- clustering_loss: Tensor
- lengths: Tensor
- non_event_intensity: Tensor
- pre_event_intensities_of_gt: Tensor
- pred_labels: Tensor
- to(device)
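Both Batch and NHReturn expose a to(device) method; a plausible reading is that it moves every tensor field to the given device. A self-contained sketch of that pattern using a stand-in tensor class (torch-free, purely illustrative — the real classes hold torch.Tensors):

```python
from dataclasses import dataclass, fields

class FakeTensor:
    """Stand-in for torch.Tensor so the sketch runs without torch."""
    def __init__(self, data, device="cpu"):
        self.data, self.device = data, device
    def to(self, device):
        return FakeTensor(self.data, device)

@dataclass
class MiniReturn:
    # Illustrative subset of ebes.types.NHReturn's fields.
    lengths: FakeTensor
    clustering_loss: FakeTensor

    def to(self, device):
        # Assumed semantics: move every tensor-like field to `device`,
        # skipping anything without a .to method (e.g. plain ints).
        for f in fields(self):
            v = getattr(self, f.name)
            if hasattr(v, "to"):
                setattr(self, f.name, v.to(device))
        return self

r = MiniReturn(FakeTensor([3]), FakeTensor([0.5]))
r.to("cuda")  # all tensor fields now report device == "cuda"
```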
- class ebes.types.NHSeq(*, tokens, lengths, time, masks=None, clustering_loss, clus_labels)
Bases: Seq
- clus_labels: Tensor
- clustering_loss: Tensor