ebes.model.mamba package
Submodules
ebes.model.mamba.mamba_es module
- class ebes.model.mamba.mamba_es.MambaModel(d_model=768, n_layer=24, rms_norm=True, residual_in_fp32=True, fused_add_norm=True, rescale_prenorm_residual=True, n_residuals_per_layer=1, device='cuda', dtype=torch.float32)
Bases: BaseSeq2Seq
- forward(input_ids)
input_ids: (B, L, D). “position_ids” is accepted only for compatibility with Transformer generation; it is not used. num_last_tokens: if > 0, only the logits for the last num_last_tokens tokens are returned.
- property output_dim
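A minimal usage sketch, assuming the package and its mamba-ssm dependency are installed and a CUDA device is available. The layer sizes below are illustrative, not the defaults, and the output shape (B, L, output_dim) is an assumption based on the BaseSeq2Seq interface.

```python
import torch

from ebes.model.mamba.mamba_es import MambaModel

# Illustrative sizes; the documented defaults are d_model=768, n_layer=24.
model = MambaModel(d_model=64, n_layer=2, device="cuda", dtype=torch.float32)

# Despite the name, input_ids here are dense features of shape (B, L, D),
# not token indices: a batch of 8 sequences, 128 steps, 64 features each.
x = torch.randn(8, 128, 64, device="cuda")

out = model(x)           # sequence-to-sequence forward pass
print(out.shape)         # assumption: (B, L, output_dim)
print(model.output_dim)  # output feature dimension reported by the property
```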
- class ebes.model.mamba.mamba_es.MixerModel(d_model, n_layer, ssm_cfg=None, norm_epsilon=1e-05, rms_norm=False, initializer_cfg=None, fused_add_norm=False, residual_in_fp32=False, device=None, dtype=None)
Bases: Module
- forward(input_ids, inference_params=None)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
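A sketch of calling MixerModel directly, assuming it consumes the same (B, L, D) dense inputs as MambaModel above; the configuration is illustrative and the output shape is an assumption.

```python
import torch

from ebes.model.mamba.mamba_es import MixerModel

# Illustrative configuration; ssm_cfg=None falls back to the default mixer settings.
mixer = MixerModel(d_model=64, n_layer=2, device="cuda", dtype=torch.float32)

x = torch.randn(8, 128, 64, device="cuda")  # (B, L, D), assumed dense features
hidden = mixer(x)                           # inference_params=None: a plain forward pass
print(hidden.shape)                         # assumption: (B, L, d_model)
```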
- ebes.model.mamba.mamba_es.create_block(d_model, ssm_cfg=None, norm_epsilon=1e-05, rms_norm=False, residual_in_fp32=False, fused_add_norm=False, layer_idx=None, device=None, dtype=None)
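A sketch of building a single block, assuming create_block returns a mamba-ssm-style Block whose forward accepts (hidden_states, residual) and returns the updated pair; the sizes and layer index are illustrative.

```python
import torch

from ebes.model.mamba.mamba_es import create_block

# One block at layer index 0; illustrative sizes.
block = create_block(d_model=64, layer_idx=0, device="cuda", dtype=torch.float32)

hidden = torch.randn(8, 128, 64, device="cuda")  # (B, L, d_model)
residual = None

# Assumption: the block follows the mamba-ssm Block interface and returns
# (hidden_states, residual) for pre-norm residual stacking across layers.
hidden, residual = block(hidden, residual)
print(hidden.shape)  # (8, 128, 64)
```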