ebes.metrics package
Submodules
ebes.metrics.custom module
- class ebes.metrics.custom.LoggingMetric(key, *_, **__)
Bases: Metric
- compute()
Implement this method to compute and return the final metric value from state variables.
Decorate compute() with @torch.inference_mode() which gives better performance by disabling view tracking.
- merge_state(metrics)
Implement this method to update the current metric’s state variables to be the merged states of the current metric and input metrics. The state variables of input metrics should stay unchanged.
Decorate merge_state() with @torch.inference_mode() which gives better performance by disabling view tracking.
self.merge_state might change the size/shape of state variables. Make sure self.update and self.compute can still be called without exceptions when state variables are merged.
This method can be used as a building block for syncing metric states in distributed training. For example, sync_and_compute in the metric toolkit will use this method to merge metric objects gathered from the process group.
- update(pred, _)
Implement this method to update the state variables of your metric class.
Decorate update() with @torch.inference_mode() which gives better performance by disabling view tracking.
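The update()/compute()/merge_state() contract documented above is the standard torcheval Metric interface. Below is a minimal, hypothetical sketch of a metric that implements this contract (a running mean of logged values). It is not the actual ebes LoggingMetric implementation; the state names and the ignored second argument of update() are assumptions made for illustration.

```python
import torch
from torcheval.metrics import Metric


class RunningMeanMetric(Metric[torch.Tensor]):
    """Illustrative sketch of the Metric contract; not the ebes implementation."""

    def __init__(self, *, device=None):
        super().__init__(device=device)
        # State variables are registered via _add_state and live on self.
        self._add_state("total", torch.tensor(0.0, device=self.device))
        self._add_state("count", torch.tensor(0.0, device=self.device))

    @torch.inference_mode()
    def update(self, values: torch.Tensor, _=None):
        # Accumulate batch statistics into the state variables.
        self.total += values.sum().to(self.device)
        self.count += values.numel()
        return self

    @torch.inference_mode()
    def compute(self) -> torch.Tensor:
        # Final metric value computed from the state variables.
        if self.count == 0:
            return torch.tensor(float("nan"))
        return self.total / self.count

    @torch.inference_mode()
    def merge_state(self, metrics):
        # Fold the states of other instances into this one (e.g. for distributed sync);
        # the input metrics themselves are left unchanged.
        for m in metrics:
            self.total += m.total.to(self.device)
            self.count += m.count.to(self.device)
        return self


m = RunningMeanMetric()
m.update(torch.tensor([1.0, 2.0, 3.0]))
print(m.compute())  # tensor(2.)
```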
- class ebes.metrics.custom.MLEM_reconstruction_loss(*_, **__)
Bases: LoggingMetric
- class ebes.metrics.custom.MLEM_sparcity_loss(*_, **__)
Bases: LoggingMetric
- class ebes.metrics.custom.MLEM_total_CE_loss(*_, **__)
Bases: LoggingMetric
- class ebes.metrics.custom.MLEM_total_mse_loss(*_, **__)
Bases: LoggingMetric
- class ebes.metrics.custom.MultiLabelMeanAUROC(*, num_tasks=1, device=None, use_fbgemm=False)
Bases: BinaryAUROC
- compute()
Return AUROC. If no update() calls are made before compute() is called, return an empty tensor.
- Returns: The return value of AUROC for each task, with shape (num_tasks,).
- Return type: Tensor
- update(inp, target, weight=None)
Update states with the ground truth labels and predictions.
- Parameters:
input (Tensor) – Tensor of label predictions. It should be predicted labels, probabilities, or logits with shape (num_tasks, n_sample) or (n_sample,).
target (Tensor) – Tensor of ground truth labels with shape (num_tasks, n_sample) or (n_sample,).
weight (Tensor) – Optional. A manual rescaling weight matching the input tensor shape, (num_tasks, n_sample) or (n_sample,).
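A usage sketch under the shape conventions above. The values are purely illustrative, and whether compute() returns the per-task AUROC vector or its mean (as the class name suggests) should be checked against the implementation.

```python
import torch
from ebes.metrics.custom import MultiLabelMeanAUROC

# Three tasks, four samples each, following the (num_tasks, n_sample) convention.
metric = MultiLabelMeanAUROC(num_tasks=3)

logits = torch.tensor([[0.2, 0.8, 0.1, 0.9],
                       [0.7, 0.3, 0.6, 0.4],
                       [0.1, 0.2, 0.8, 0.9]])
labels = torch.tensor([[0., 1., 0., 1.],
                       [1., 0., 1., 0.],
                       [0., 0., 1., 1.]])

metric.update(logits, labels)  # may be called once per batch
print(metric.compute())
```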
- class ebes.metrics.custom.NegRootMeanSquaredError(*, multioutput='uniform_average', device=None)
Bases: MeanSquaredError
- compute()
Return the Mean Squared Error.
NaN is returned if no calls to update() are made before compute() is called.
- Return type: Tensor
- update(pred, target)
Update states with the ground truth values and predictions.
- Parameters:
input (Tensor) – Tensor of predicted values with shape (n_sample, n_output).
target (Tensor) – Tensor of ground truth values with shape (n_sample, n_output).
sample_weight (Optional) – Tensor of sample weights with shape (n_sample,). Defaults to None.
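A minimal usage sketch, assuming (as the class name suggests) that compute() returns the negated root of the accumulated mean squared error; the inherited docstring above describes the underlying MeanSquaredError accumulation.

```python
import torch
from ebes.metrics.custom import NegRootMeanSquaredError

metric = NegRootMeanSquaredError()

pred = torch.tensor([[2.5], [0.0], [2.0], [8.0]])     # (n_sample, n_output)
target = torch.tensor([[3.0], [-0.5], [2.0], [7.0]])  # (n_sample, n_output)

metric.update(pred, target)
# MSE = 0.375 for these values, so the expected result is about -sqrt(0.375) = -0.612.
print(metric.compute())
```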
- class ebes.metrics.custom.PrimeNetAccuracy(*_, **__)
Bases: Metric
- compute()
Implement this method to compute and return the final metric value from state variables.
Decorate compute() with @torch.inference_mode() which gives better performance by disabling view tracking.
- merge_state(metrics)
Implement this method to update the current metric’s state variables to be the merged states of the current metric and input metrics. The state variables of input metrics should stay unchanged.
Decorate merge_state() with @torch.inference_mode() which gives better performance by disabling view tracking.
self.merge_state might change the size/shape of state variables. Make sure self.update and self.compute can still be called without exceptions when state variables are merged.
This method can be used as a building block for syncing metric states in distributed training. For example, sync_and_compute in the metric toolkit will use this method to merge metric objects gathered from the process group.
- update(pred, _)
Implement this method to update the state variables of your metric class.
Decorate update() with @torch.inference_mode() which gives better performance by disabling view tracking.
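The merge_state() behaviour described for these metrics follows the standard torcheval pattern: the states of several independently updated instances can be folded into one before compute(). A small sketch using torcheval's Mean (purely for illustration, since the pred objects consumed by PrimeNetAccuracy are model-specific and not documented here):

```python
import torch
from torcheval.metrics import Mean

# Two metric instances, e.g. one per worker or data shard.
m0, m1 = Mean(), Mean()
m0.update(torch.tensor([1.0, 2.0]))
m1.update(torch.tensor([4.0]))

# Fold m1's state into m0; m1's own state stays unchanged, as required above.
m0.merge_state([m1])
print(m0.compute())  # tensor(2.3333) == (1 + 2 + 4) / 3
print(m1.compute())  # tensor(4.), m1 is untouched
```

In actual distributed training, torcheval's sync_and_compute utility applies this same merge to metric objects gathered from the process group, as noted in the docstring above.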
ebes.metrics.neural_hawkes module
- class ebes.metrics.neural_hawkes.NHEventLogIntensity(*, device=None)
Bases: Mean
- update(pred, _)
Compute weighted mean. When weight is not provided, it calculates the unweighted mean.
weighted_mean = sum(weight * input) / sum(weight)
- Parameters:
input (Tensor) – Tensor of input values.
weight (optional) – Float or Int or Tensor of input weights. It defaults to 1.0. If weight is a Tensor, its size should match the input tensor size.
- Raises: ValueError – If the value of weight is neither a float nor an int nor a torch.Tensor that matches the input tensor size.
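The weighted-mean formula above can be checked directly against torcheval's Mean, which this class and the other NH* metrics below extend; the values and weights here are purely illustrative.

```python
import torch
from torcheval.metrics import Mean

m = Mean()
m.update(torch.tensor([1.0, 2.0, 3.0]), weight=torch.tensor([1.0, 1.0, 2.0]))
# weighted_mean = (1*1 + 1*2 + 2*3) / (1 + 1 + 2) = 9 / 4 = 2.25
print(m.compute())  # tensor(2.2500)
```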
- class ebes.metrics.neural_hawkes.NHEventTypeAccuracy(*_, **__)
Bases: Metric
- compute()
Implement this method to compute and return the final metric value from state variables.
Decorate compute() with @torch.inference_mode() which gives better performance by disabling view tracking.
- merge_state(metrics)
Implement this method to update the current metric’s state variables to be the merged states of the current metric and input metrics. The state variables of input metrics should stay unchanged.
Decorate merge_state() with @torch.inference_mode() which gives better performance by disabling view tracking.
self.merge_state might change the size/shape of state variables. Make sure self.update and self.compute can still be called without exceptions when state variables are merged.
This method can be used as a building block for syncing metric states in distributed training. For example, sync_and_compute in the metric toolkit will use this method to merge metric objects gathered from the process group.
- update(pred, _)
Implement this method to update the state variables of your metric class.
Decorate update() with @torch.inference_mode() which gives better performance by disabling view tracking.
- class ebes.metrics.neural_hawkes.NHLL(*, device=None)
Bases: Mean
- update(pred, _)
Compute weighted mean. When weight is not provided, it calculates the unweighted mean.
weighted_mean = sum(weight * input) / sum(weight)
- Parameters:
input (Tensor) – Tensor of input values.
weight (optional) – Float or Int or Tensor of input weights. It defaults to 1.0. If weight is a Tensor, its size should match the input tensor size.
- Raises: ValueError – If the value of weight is neither a float nor an int nor a torch.Tensor that matches the input tensor size.
- class ebes.metrics.neural_hawkes.NHNegNonEventIntensity(*, device=None)
Bases: Mean
- update(pred, _)
Compute weighted mean. When weight is not provided, it calculates the unweighted mean.
weighted_mean = sum(weight * input) / sum(weight)
- Parameters:
input (Tensor) – Tensor of input values.
weight (optional) – Float or Int or Tensor of input weights. It defaults to 1.0. If weight is a Tensor, its size should match the input tensor size.
- Raises: ValueError – If the value of weight is neither a float nor an int nor a torch.Tensor that matches the input tensor size.