modalities.models.huggingface_adapters package

Submodules

modalities.models.huggingface_adapters.hf_adapter module

class modalities.models.huggingface_adapters.hf_adapter.HFModelAdapter(config, prediction_key, load_checkpoint=False, *inputs, **kwargs)[source]

Bases: PreTrainedModel

Adapter class that exposes a modalities model through the HuggingFace PreTrainedModel interface.

Initializes the HFModelAdapter object.

Args:

config (HFModelAdapterConfig): The configuration object for the HFModelAdapter.
prediction_key (str): The key for the prediction.
load_checkpoint (bool, optional): Whether to load a checkpoint. Defaults to False.
*inputs: Variable length argument list.
**kwargs: Arbitrary keyword arguments.

Parameters:
  • config (HFModelAdapterConfig)

  • prediction_key (str)

  • load_checkpoint (bool)

config_class

alias of HFModelAdapterConfig
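
A minimal construction sketch. Assumptions: model_config stands in for a valid modalities model configuration (its schema is defined by the modalities config system and not shown here), and the prediction_key value "logits" is illustrative rather than prescribed by the adapter:

    from modalities.models.huggingface_adapters.hf_adapter import (
        HFModelAdapter,
        HFModelAdapterConfig,
    )

    # `model_config` is assumed to be a valid modalities model configuration;
    # its exact schema comes from the modalities config system.
    hf_config = HFModelAdapterConfig(config=model_config)

    adapter = HFModelAdapter(
        config=hf_config,
        prediction_key="logits",  # assumed key for the model's prediction output
        load_checkpoint=False,    # set True to load weights from a checkpoint
    )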

forward(input_ids, attention_mask=None, return_dict=False, output_attentions=False, output_hidden_states=False)[source]

Forward pass of the HFModelAdapter module.

Args:

input_ids (torch.Tensor): The input tensor of token indices.
attention_mask (torch.Tensor, optional): The attention mask tensor. Defaults to None.
return_dict (bool, optional): Whether to return a dictionary as output. Defaults to False.
output_attentions (bool, optional): Whether to output attentions. Defaults to False.
output_hidden_states (bool, optional): Whether to output hidden states. Defaults to False.

Returns:

ModalitiesModelOutput | torch.Tensor: The output of the forward pass.

Parameters:
  • input_ids (Tensor)

  • attention_mask (Tensor | None)

  • return_dict (bool | None)

  • output_attentions (bool | None)

  • output_hidden_states (bool | None)
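
A usage sketch, assuming adapter was constructed as in the sketch above and that the wrapped model's vocabulary size is at least 1000 (both assumptions):

    import torch

    input_ids = torch.randint(0, 1000, (1, 16))  # (batch, sequence) token indices
    attention_mask = torch.ones_like(input_ids)

    # Per the return annotation, return_dict=True should yield a ModalitiesModelOutput,
    # while the default return_dict=False returns the prediction tensor directly.
    output = adapter.forward(
        input_ids=input_ids,
        attention_mask=attention_mask,
        return_dict=True,
    )
    print(output.logits.shape)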

prepare_inputs_for_generation(input_ids, attention_mask=None, **kwargs)[source]

Prepares the inputs for generation.

Return type:

dict[str, Any]

Parameters:
  • input_ids (LongTensor)

  • attention_mask (LongTensor)

Args:

input_ids (torch.LongTensor): The input tensor of token IDs.
attention_mask (torch.LongTensor, optional): The attention mask tensor. Defaults to None.
**kwargs: Additional keyword arguments.

Returns:

dict[str, Any]: A dictionary containing the prepared inputs for generation.

Note:

Implement in subclasses of PreTrainedModel to customize how inputs are prepared for the generate method.
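
A short sketch, again assuming an existing adapter instance (see the construction sketch above); the exact keys of the returned dictionary depend on the adapter's implementation:

    import torch

    input_ids = torch.randint(0, 1000, (1, 8), dtype=torch.long)
    attention_mask = torch.ones_like(input_ids)

    model_inputs = adapter.prepare_inputs_for_generation(
        input_ids=input_ids,
        attention_mask=attention_mask,
    )
    # The returned dict is what `generate` passes on to `forward`; inspecting the
    # keys shows which tensors the adapter expects during generation.
    print(sorted(model_inputs.keys()))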

class modalities.models.huggingface_adapters.hf_adapter.HFModelAdapterConfig(**kwargs)[source]

Bases: PretrainedConfig

HFModelAdapterConfig configuration class for the HFModelAdapter.

Initializes an HFModelAdapterConfig object.

Args:

**kwargs: Additional keyword arguments.

Raises:

ConfigError: If the config is not passed to HFModelAdapterConfig.

model_type: str = 'modalities'
to_json_string(use_diff=True)[source]

Converts the adapter object configuration to a JSON string representation.

Return type:

str

Parameters:

use_diff (bool)

Args:

use_diff (bool, optional): Whether to include only the differences from the default configuration. Defaults to True.

Returns:

str: The JSON string representation of the adapter object.
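
A short sketch, with model_config again standing in for a valid modalities model configuration (its schema is an assumption not covered here):

    from modalities.models.huggingface_adapters.hf_adapter import HFModelAdapterConfig

    # Omitting the `config` keyword raises ConfigError (see above).
    hf_config = HFModelAdapterConfig(config=model_config)

    print(hf_config.model_type)                      # 'modalities'
    print(hf_config.to_json_string())                # diff against the default configuration
    print(hf_config.to_json_string(use_diff=False))  # full configuration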

class modalities.models.huggingface_adapters.hf_adapter.ModalitiesModelOutput(logits=None, hidden_states=None, attentions=None)[source]

Bases: ModelOutput

Output container returned by HFModelAdapter.forward.

Args:

logits (torch.FloatTensor, optional): The logits output of the model. Defaults to None.
hidden_states (tuple[torch.FloatTensor], optional): The hidden states output of the model. Defaults to None.
attentions (tuple[torch.FloatTensor], optional): The attentions output of the model. Defaults to None.

Parameters:
  • logits (FloatTensor | None)

  • hidden_states (tuple[FloatTensor] | None)

  • attentions (tuple[FloatTensor] | None)

attentions: Optional[tuple[FloatTensor]] = None
hidden_states: Optional[tuple[FloatTensor]] = None
logits: Optional[FloatTensor] = None
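
A minimal, self-contained sketch of constructing and reading a ModalitiesModelOutput:

    import torch

    from modalities.models.huggingface_adapters.hf_adapter import ModalitiesModelOutput

    # All fields are optional; populate only what the forward pass produced.
    output = ModalitiesModelOutput(logits=torch.randn(1, 16, 1000))

    print(output.logits.shape)   # torch.Size([1, 16, 1000])
    print(output.hidden_states)  # None, since it was not set
    # As a ModelOutput subclass it also supports dict-style access for set fields:
    print(output["logits"].shape)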

Module contents