modalities.models.components package

Submodules

modalities.models.components.layer_norms module

class modalities.models.components.layer_norms.LayerNormConfig(**data)[source]

Bases: BaseModel

Configuration class for Layer Normalization.

Args:

normalized_shape (int): The expected size of the input shape.
eps (float, optional): A value added to the denominator for numerical stability. Defaults to 1e-6.
elementwise_affine (bool, optional): Whether to include learnable affine parameters. Defaults to True.
bias (bool, optional): Whether to include a bias term. Defaults to True.

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError (pydantic_core.ValidationError) if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:
  • normalized_shape (int, required; strict, must be >= 1)

  • eps (float, optional, default 1e-06; strict, must be > 0)

  • elementwise_affine (bool, optional, default True; strict)

  • bias (bool, optional, default True; strict)

bias: Annotated[bool]
elementwise_affine: Annotated[bool]
eps: Annotated[float]
model_config: ClassVar[ConfigDict] = {}

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

normalized_shape: Annotated[int]
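
A minimal usage sketch, assuming the import path shown in the class signature above; the strict field constraints surface as pydantic ValidationErrors:

    from pydantic import ValidationError

    from modalities.models.components.layer_norms import LayerNormConfig

    # Only normalized_shape is required; the remaining fields fall back to their defaults.
    cfg = LayerNormConfig(normalized_shape=768)
    assert cfg.eps == 1e-06
    assert cfg.elementwise_affine and cfg.bias

    # Strict, constrained fields reject invalid input with a ValidationError.
    try:
        LayerNormConfig(normalized_shape=0)  # violates the ge=1 constraint
    except ValidationError as err:
        print(err)
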
class modalities.models.components.layer_norms.RMSLayerNorm(ndim, bias=True, epsilon=1e-05)[source]

Bases: Module

RMS normalization class.

Initializes an RMSLayerNorm module.

Args:

ndim (int): The number of dimensions of the input tensor.
bias (bool, optional): If True, adds a learnable bias to the normalized tensor. Defaults to True.
epsilon (float, optional): A small value added to the denominator for numerical stability. Defaults to 1e-5.

Note:

Original paper: https://arxiv.org/pdf/1910.07467.pdf
Source code adapted from https://github.com/facebookresearch/llama/blob/a0a4da8b497c566403941ceec47c2512ecf9dd20/llama/model.py#L34C1-L77C36

Returns:

None

Parameters:
  • ndim (int)

  • bias (bool)

  • epsilon (float)

forward(x)[source]

Forward pass of the layer normalization module.

Return type:

Tensor

Parameters:

x (Tensor)

Args:

x (torch.Tensor): Input tensor.

Returns:

torch.Tensor: Output tensor after applying layer normalization.

reset_parameters()[source]
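
A brief usage sketch, assuming the constructor and forward signatures documented above:

    import torch

    from modalities.models.components.layer_norms import RMSLayerNorm

    norm = RMSLayerNorm(ndim=512, bias=True, epsilon=1e-05)
    x = torch.randn(4, 128, 512)  # (batch, sequence, hidden)
    y = norm(x)                   # output shape matches the input shape

    # The core computation from the referenced paper, shown manually; the module
    # additionally applies its learnable gain (and the optional bias) elementwise:
    x_normed = x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + 1e-05)
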
class modalities.models.components.layer_norms.RMSLayerNormConfig(**data)[source]

Bases: BaseModel

Configuration class for RMSLayerNorm.

Args:

ndim (int): Number of dimensions for the input tensor. Must be greater than or equal to 1.
epsilon (float, optional): Small value added to the input to avoid division by zero. Defaults to 1e-6.
bias (bool, optional): Whether to include a bias term. Defaults to True.

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError (pydantic_core.ValidationError) if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:
  • ndim (int, required; strict, must be >= 1)

  • epsilon (float, optional, default 1e-06; must be > 0)

  • bias (bool, optional, default True; strict)

bias: Annotated[bool]
epsilon: Annotated[float]
model_config: ClassVar[ConfigDict] = {}

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

ndim: Annotated[int]
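
A sketch of wiring the config into the module it parameterizes; pairing RMSLayerNormConfig with RMSLayerNorm this way is an assumption based on the matching field and constructor names:

    from modalities.models.components.layer_norms import RMSLayerNorm, RMSLayerNormConfig

    cfg = RMSLayerNormConfig(ndim=1024)  # epsilon defaults to 1e-06, bias to True
    norm = RMSLayerNorm(ndim=cfg.ndim, bias=cfg.bias, epsilon=cfg.epsilon)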

Module contents