modalities.utils.benchmarking package
Submodules
modalities.utils.benchmarking.benchmarking_utils module
- class modalities.utils.benchmarking.benchmarking_utils.FileNames(value)[source]
  Bases: Enum
  - ERRORS_FILE_REGEX = 'error_logs_*.log'
  - RESULTS_FILE = 'evaluation_results.jsonl'
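A minimal usage sketch, assuming these constants are resolved relative to a run directory; the `exp_dir` path is a placeholder:

```python
from pathlib import Path

from modalities.utils.benchmarking.benchmarking_utils import FileNames

exp_dir = Path("/experiments/run_001")  # placeholder run directory

# ERRORS_FILE_REGEX holds a glob-style pattern, so it can be fed to Path.glob.
error_logs = sorted(exp_dir.glob(FileNames.ERRORS_FILE_REGEX.value))

# RESULTS_FILE is a fixed file name inside the run directory.
results_file = exp_dir / FileNames.RESULTS_FILE.value
```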
- class modalities.utils.benchmarking.benchmarking_utils.SweepSets(value)[source]
  Bases: Enum
  - ALL_CONFIGS = 'all_configs'
  - MOST_RECENT_CONFIGS = 'most_recent_configs'
  - REMAINING_CONFIGS = 'remaining_configs'
  - UPDATED_CONFIGS = 'updated_configs'
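The member values match the dictionary keys used by the sweep-status functions below ('all_configs', 'most_recent_configs', 'remaining_configs', 'updated_configs'); a brief sketch:

```python
from modalities.utils.benchmarking.benchmarking_utils import SweepSets

# Each member wraps a plain string key, e.g. SweepSets.REMAINING_CONFIGS.value
# evaluates to 'remaining_configs'.
for sweep_set in SweepSets:
    print(f"{sweep_set.name} -> {sweep_set.value}")
```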
- modalities.utils.benchmarking.benchmarking_utils.get_current_sweep_status(exp_root, expected_steps, world_size=None, skip_exception_types=None)[source]
  Get the status of the sweep by assigning each config file path to one of the categories 'all', 'most_recent', or 'remaining'.
  - Parameters:
    - exp_root (Path): The root directory of the experiment.
    - expected_steps (int): The expected number of steps in the evaluation results.
    - world_size (Optional[int]): The number of ranks (world size) to filter the configs for.
    - skip_exception_types (Optional[list[str]]): List of exception types to skip when checking if an experiment is done. A skipped experiment is considered done in this case.
  - Returns:
    A dictionary with keys 'all_configs', 'most_recent_configs', and 'remaining_configs', each containing a list of Path objects pointing to the respective config files.
  - Return type:
    dict[str, list[Path]]
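A usage sketch; the experiment root, step count, world size, and exception type name are placeholder values:

```python
from pathlib import Path

from modalities.utils.benchmarking.benchmarking_utils import (
    SweepSets,
    get_current_sweep_status,
)

status = get_current_sweep_status(
    exp_root=Path("/experiments/sweep_001"),    # placeholder sweep root
    expected_steps=1000,                        # placeholder step count
    world_size=8,                               # only consider 8-rank configs
    skip_exception_types=["OutOfMemoryError"],  # placeholder exception type
)

# Configs that still need to be (re)run, keyed by the SweepSets value.
remaining = status[SweepSets.REMAINING_CONFIGS.value]
```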
- modalities.utils.benchmarking.benchmarking_utils.get_updated_sweep_status(exp_root, expected_steps, skip_exception_types, world_size=None, create_new_folders_if_partially_done=True)[source]
  List all remaining runs in the experiment root directory.
  - Parameters:
    - exp_root (Path): The root directory of the experiment.
    - expected_steps (int): The expected number of steps in the evaluation results.
    - skip_exception_types (Optional[list[str]]): List of exception types to skip when checking if an experiment is done. A skipped experiment is considered done in this case.
    - world_size (Optional[int]): The number of ranks (world size) to filter the configs for.
    - create_new_folders_if_partially_done (bool): If True, create new experiment folders for the remaining configs.
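A usage sketch with placeholder arguments, mirroring the call above:

```python
from pathlib import Path

from modalities.utils.benchmarking.benchmarking_utils import get_updated_sweep_status

# With create_new_folders_if_partially_done=True, new experiment folders are
# created for configs whose runs are only partially done.
updated_status = get_updated_sweep_status(
    exp_root=Path("/experiments/sweep_001"),  # placeholder sweep root
    expected_steps=1000,                      # placeholder step count
    skip_exception_types=[],                  # skip no exception types
    world_size=None,                          # do not filter by world size
    create_new_folders_if_partially_done=True,
)
```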
modalities.utils.benchmarking.sweep_utils module
- class modalities.utils.benchmarking.sweep_utils.SweepConfig(**data)[source]
  Bases: BaseModel
  Create a new model by parsing and validating input data from keyword arguments.
  Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
  self is explicitly positional-only to allow self as a field name.
  - model_config: ClassVar[ConfigDict] = {}
    Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
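The model's fields are not documented on this page; the sketch below only illustrates the standard pydantic construction and validation behavior described above, with hypothetical field names:

```python
from pydantic import ValidationError

from modalities.utils.benchmarking.sweep_utils import SweepConfig

# 'sweep_name' and 'num_steps' are hypothetical field names; the real schema
# is defined by the SweepConfig model in sweep_utils.
try:
    sweep_config = SweepConfig(**{"sweep_name": "demo", "num_steps": 1000})
except ValidationError as exc:
    print(exc)  # raised when the keyword arguments do not form a valid model
```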
- class modalities.utils.benchmarking.sweep_utils.SweepGenerator(sweep_config, output_dir)[source]
  Bases: object
  Initialize the SweepGenerator with the sweep configuration and the output directory.
  - Parameters:
    - sweep_config (SweepConfig)
    - output_dir (Path)
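A construction sketch; the SweepConfig fields and the output directory are hypothetical placeholders:

```python
from pathlib import Path

from modalities.utils.benchmarking.sweep_utils import SweepConfig, SweepGenerator

# Hypothetical SweepConfig construction (see the sketch above for the caveat
# about field names), pointed at a placeholder output directory.
sweep_config = SweepConfig(**{"sweep_name": "demo", "num_steps": 1000})
generator = SweepGenerator(
    sweep_config=sweep_config,
    output_dir=Path("/experiments/generated_sweeps"),
)
```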