ebes.pipeline package
Submodules
ebes.pipeline.base_runner module
- class ebes.pipeline.base_runner.Runner
Bases:
ABC
- do_n_runs(config, n_runs=3, n_workers=3)
Do n runs with different seeds in parallel.
- Return type:
DataFrame
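A minimal usage sketch (the config path and runner name are placeholders; it also assumes get_runner resolves a registered Runner subclass by name):

```python
from omegaconf import OmegaConf
from ebes.pipeline.base_runner import Runner

config = OmegaConf.load("configs/experiment.yaml")  # hypothetical config path
runner = Runner.get_runner("MyRunner")              # hypothetical registered runner name

# Launch 3 runs with different seeds in parallel and collect the aggregated DataFrame.
results = runner.do_n_runs(config, n_runs=3, n_workers=3)
print(results)
```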
- static get_runner(name, *args, **kwargs)
- param_grid(trial, config)
- Return type:
tuple[Trial, DictConfig]
- abstract pipeline(config)
Construct your pipeline.
- Return type:
dict[str, float]
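A sketch of how a concrete Runner might implement pipeline() (the class name and the fixed metric value are placeholders, not part of ebes):

```python
from omegaconf import DictConfig
from ebes.pipeline.base_runner import Runner

class MyRunner(Runner):
    def pipeline(self, config: DictConfig) -> dict[str, float]:
        # A real pipeline would build the data, train a model and evaluate it here.
        # It must return a mapping from metric names to scalar values,
        # e.g. the "val_metric" that run_optuna optimizes.
        return {"val_metric": 0.0}
```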
- run(config)
- Return type:
DataFrame | Study
- run_optuna(config, target_metric='val_metric', request_list=[], n_startup_trials=3, n_trials=50, multivariate=True, group=True)
Set target_metric according to _train_eval(). request_list is a list of dicts where each {key: value} pair is {trial_parameter_name: parameter_value}. n_startup_trials is the number of random (startup) trials. n_trials is the total number of trials made by this function call (it does not affect parallel runs). n_runs is better left untouched.
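A hedged example of launching a search (the config path, runner name and the parameter name in request_list are assumptions):

```python
from omegaconf import OmegaConf
from ebes.pipeline.base_runner import Runner

config = OmegaConf.load("configs/experiment.yaml")  # hypothetical config path
runner = Runner.get_runner("MyRunner")              # hypothetical registered runner name

# Each dict fixes {trial_parameter_name: parameter_value} for one requested trial.
request_list = [{"optimizer.params.lr": 1e-3}]      # parameter name is an assumption

runner.run_optuna(
    config,
    target_metric="val_metric",
    request_list=request_list,
    n_startup_trials=3,
    n_trials=50,
)
```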
ebes.pipeline.utils module
- ebes.pipeline.utils.access_by_name(config, name)
- ebes.pipeline.utils.assign_by_name(config, name, value)
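A small sketch of how these two helpers are typically used, assuming name is a dot-separated path into a nested config:

```python
from omegaconf import OmegaConf
from ebes.pipeline.utils import access_by_name, assign_by_name

config = OmegaConf.create({"optimizer": {"params": {"lr": 1e-3}}})

# Read and overwrite a nested value by its dotted name (the path format is an assumption).
lr = access_by_name(config, "optimizer.params.lr")
assign_by_name(config, "optimizer.params.lr", 3e-4)
```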
- ebes.pipeline.utils.get_dict_from_trial_params(params)
- ebes.pipeline.utils.get_loss(name, params=None)
- ebes.pipeline.utils.get_metrics(metric_specs=None, device=None)
- Return type:
list[Metric]
- ebes.pipeline.utils.get_optimizer(net_params, name='Adam', params=None)
- ebes.pipeline.utils.get_scheduler(optimizer, name, params=None)
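A usage sketch of the factory helpers; the loss, optimizer and scheduler names follow torch conventions and are assumptions about what the registries accept, as is passing the model's parameters as net_params:

```python
import torch
from ebes.pipeline import utils

net = torch.nn.Linear(16, 2)

loss = utils.get_loss("CrossEntropyLoss")  # assumed to resolve torch.nn loss names
optimizer = utils.get_optimizer(net.parameters(), name="Adam", params={"lr": 1e-3})
scheduler = utils.get_scheduler(optimizer, name="StepLR", params={"step_size": 10})
```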
- ebes.pipeline.utils.get_unique_folder_suffix(folder_path)
- ebes.pipeline.utils.optuna_df(path='log/test', name=None)
- Return type:
tuple[DataFrame, Study]
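A brief example of inspecting logged results (assumes the log directory already contains finished Optuna trials):

```python
from ebes.pipeline.utils import optuna_df

trials_df, study = optuna_df(path="log/test")  # default path from the signature
print(trials_df.head())            # per-trial summary (assumed content)
print(study.best_trial.params)     # best parameters found by the underlying optuna Study
```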
- ebes.pipeline.utils.parse_n_runs(result_list)
- Return type:
DataFrame
- ebes.pipeline.utils.set_start_method(logger)
- ebes.pipeline.utils.set_start_method_as_fork(logger)
- ebes.pipeline.utils.suggest_conf(suggestions, config, trial)