ebes.model package

Submodules

ebes.model.agg module

Sequence-to-vector heads

class ebes.model.agg.AllHiddenMean(*args, **kwargs)

Bases: BaseAgg

forward(seq)

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type: Tensor

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class ebes.model.agg.BaseAgg(*args, **kwargs)

Bases: BaseModel, ABC

abstract forward(seq)

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type: Tensor

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class ebes.model.agg.TakeLastHidden(*args, **kwargs)

Bases: BaseAgg

forward(seq)

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type: Tensor

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class ebes.model.agg.ToTensor(*args, **kwargs)

Bases: BaseAgg

forward(seq)

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type: Tensor

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class ebes.model.agg.ValidHiddenMean(*args, **kwargs)

Bases: BaseAgg

forward(seq)

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type: Tensor

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
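These heads pool a Seq of per-step hidden states into one Tensor per sequence, for example by taking the last hidden state or averaging over the valid (non-padded) steps. A minimal usage sketch follows; it assumes that seq is the package's Seq container produced by an upstream encoder and that the heads need no constructor arguments beyond the generic *args, **kwargs shown above:

>>> from ebes.model.agg import TakeLastHidden, ValidHiddenMean
>>> head = TakeLastHidden()
>>> pooled = head(seq)            # Tensor: one vector per sequence
>>> mean_head = ValidHiddenMean()
>>> pooled_mean = mean_head(seq)  # mean over the valid time steps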

ebes.model.basemodel module

Skeleton model with general structure

class ebes.model.basemodel.BaseModel(*args, **kwargs)

Bases: Module

static get_model(name, *args, **kwargs)
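The signature suggests a factory: get_model resolves a model implementation by name and instantiates it with the remaining arguments. An illustrative sketch under that assumption; the registered name "GRU" and the keyword arguments below are not taken from this reference:

>>> from ebes.model.basemodel import BaseModel
>>> encoder = BaseModel.get_model("GRU", input_size=32, hidden_size=64)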

ebes.model.mtand module

class ebes.model.mtand.MTAND(input_dim, nhidden=16, embed_time=16, num_heads=1)

Bases: BaseModel

forward(seq)

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type: Seq

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

learn_time_embedding(tt)

property output_dim
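An illustrative instantiation with the documented constructor defaults; seq is assumed to be an ebes Seq whose per-step feature count equals input_dim (a sketch, not an example shipped with the package):

>>> from ebes.model.mtand import MTAND
>>> mtand = MTAND(input_dim=32, nhidden=16, embed_time=16, num_heads=1)
>>> out = mtand(seq)   # a Seq; its feature size is given by mtand.output_dim
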
class ebes.model.mtand.MultiTimeAttention(input_dim, nhidden=16, embed_time=16, num_heads=1)

Bases: Module

attention(query, key, value, mask=None, dropout=None)

Compute ‘Scaled Dot Product Attention’

forward(query, key, value, mask=None, dropout=None)

Compute ‘Scaled Dot Product Attention’
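For reference, 'Scaled Dot Product Attention' divides the query-key dot products by the square root of the key dimension before applying softmax. The sketch below is a generic, self-contained version of that computation, not the exact implementation of MultiTimeAttention (which additionally handles multiple heads and time embeddings):

import math
import torch

def scaled_dot_product_attention(query, key, value, mask=None, dropout=None):
    # query, key, value: [..., seq_len, d_k]
    d_k = query.size(-1)
    scores = query @ key.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        # masked positions get a large negative score, i.e. ~zero weight
        scores = scores.masked_fill(mask == 0, -1e9)
    weights = torch.softmax(scores, dim=-1)
    if dropout is not None:
        weights = dropout(weights)
    return weights @ value, weights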

ebes.model.preprocess module

Preprocessing model.

class ebes.model.preprocess.Batch2Seq(cat_cardinalities, num_count=None, num_features=None, cat_emb_dim=None, num_emb_dim=None, time_process='none', num_norm=False)

Bases: BaseModel

forward(batch, copy=True)

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type: Seq

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

property output_dim
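An illustrative construction; the feature names, cardinalities and the batch object below are assumptions made for the sketch, since this reference only shows the constructor signature:

>>> from ebes.model.preprocess import Batch2Seq
>>> b2s = Batch2Seq(
...     cat_cardinalities={"mcc": 300, "channel": 5},  # assumed: name -> cardinality
...     num_count=4,
...     cat_emb_dim=8,
...     num_emb_dim=8,
... )
>>> seq = b2s(batch)   # a Seq with b2s.output_dim features per step
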
class ebes.model.preprocess.SeqBatchNorm(num_count)

Bases: Module

forward(x, lengths)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

ebes.model.seq2seq module

Collection of Seq-2-Seq models

class ebes.model.seq2seq.BaseSeq2Seq(*args, **kwargs)

Bases: BaseModel, ABC

abstract forward(seq)

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type: Seq

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class ebes.model.seq2seq.GRU(input_size, hidden_size, num_layers=1, bias=True, dropout=0.0, bidirectional=False, initial_hidden='static')

Bases: BaseSeq2Seq

forward(seq)

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type: Seq

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

property output_dim
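An illustrative use of the GRU encoder; seq is assumed to come from Batch2Seq (or any other source of Seq objects), and input_size must match its per-step feature count:

>>> from ebes.model.seq2seq import GRU
>>> gru = GRU(input_size=32, hidden_size=64, num_layers=1)
>>> encoded = gru(seq)   # a Seq of hidden states; width given by gru.output_dim
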
class ebes.model.seq2seq.PositionalEncoding(d_model, dropout=0.1, max_len=5000, enc_type='base')

Bases: Module

Positional encoding, based on https://github.com/pytorch/examples/blob/main/word_language_model/model.py

Parameters:
  • d_model – the embed dim (required).

  • dropout – the dropout value (default=0.1).

  • max_len – the max. length of the incoming sequence (default=5000).

Examples

>>> pos_encoder = PositionalEncoding(d_model)

forward(x)

Inputs of the forward function.

Parameters:
  • x – the sequence fed to the positional encoder model (required).

Shape:

x: [sequence length, batch size, embed dim]
output: [sequence length, batch size, embed dim]

Examples

>>> output = pos_encoder(x)

get_pe(max_len, d_model)

class ebes.model.seq2seq.Projection(in_features, out_features, bias=True)

Bases: BaseSeq2Seq

forward(seq)

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type: Seq

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

property output_dim

class ebes.model.seq2seq.Transformer(input_size, max_len, num_layers=1, num_heads=1, scale_hidden=4, dropout=0.0, pos_dropout=0.0, pos_enc_type='base')

Bases: BaseSeq2Seq

forward(seq)

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type: Seq

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

property output_dim
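A sketch composing the documented pieces end to end: project the input features, encode them with the Transformer, then pool to a fixed-size vector with an aggregation head. The dimensions and the particular components are illustrative choices, not prescribed by the package:

>>> from ebes.model.seq2seq import Projection, Transformer
>>> from ebes.model.agg import TakeLastHidden
>>> proj = Projection(in_features=32, out_features=64)
>>> enc = Transformer(input_size=64, max_len=512, num_layers=2, num_heads=4)
>>> head = TakeLastHidden()
>>> embedding = head(enc(proj(seq)))   # one fixed-size Tensor per sequence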

ebes.model.utils module

class ebes.model.utils.FrozenModel(model)

Bases: Module

forward(*args, **kwargs)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

train(mode=True)

Set the module in training mode.

This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

Parameters: mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.

Returns: self

Return type: Module
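FrozenModel wraps an existing module; the overridden train() indicates it is meant to stay out of training-mode switching, though the exact freezing behaviour (e.g. whether gradients are disabled) is not spelled out in this reference. An illustrative sketch; pretrained_encoder stands for any existing nn.Module:

>>> from ebes.model.utils import FrozenModel
>>> frozen = FrozenModel(pretrained_encoder)
>>> features = frozen(seq)   # forwards the call to the wrapped model
>>> frozen.train()           # overridden; see the train() documentation above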

ebes.model.utils.build_model(model_conf)

Return type: Module
