Metrics Reference¶
mmm_eval.metrics ¶
Accuracy metrics for MMM evaluation.
Functions¶
calculate_absolute_percentage_change(baseline_series: pd.Series, comparison_series: pd.Series) -> pd.Series ¶
Calculate the absolute percentage change between two series.
Source code in mmm_eval/metrics/accuracy_functions.py
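The entry above does not show the formula; here is a minimal illustrative sketch, assuming the conventional definition |comparison − baseline| / |baseline| (the authoritative version lives in mmm_eval/metrics/accuracy_functions.py):

```python
import pandas as pd

# Illustrative re-implementation only; assumes |comparison - baseline| / |baseline|.
def absolute_percentage_change_sketch(
    baseline_series: pd.Series, comparison_series: pd.Series
) -> pd.Series:
    # pandas aligns the two series on their shared index before subtracting.
    return (comparison_series - baseline_series).abs() / baseline_series.abs()

baseline = pd.Series({"tv": 2.0, "search": 4.0})
comparison = pd.Series({"tv": 3.0, "search": 3.0})
print(absolute_percentage_change_sketch(baseline, comparison))
# tv        0.50
# search    0.25
```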
calculate_mean_for_singular_values_across_cross_validation_folds(fold_metrics: list[AccuracyMetricResults], metric_name: AccuracyMetricNames) -> float ¶
Calculate the mean of a single-valued (scalar) metric across cross-validation folds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| fold_metrics | list[AccuracyMetricResults] | List of metric result objects | required |
| metric_name | AccuracyMetricNames | Name of the metric attribute | required |

Returns:

| Type | Description |
|---|---|
| float | Mean value as float |
Source code in mmm_eval/metrics/accuracy_functions.py
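A sketch of the scalar aggregation this helper likely performs; the stand-in result objects and the "smape" attribute below are illustrative assumptions, not confirmed members of AccuracyMetricResults or AccuracyMetricNames:

```python
from dataclasses import dataclass
from enum import Enum

import numpy as np

@dataclass
class FoldResult:  # stand-in for AccuracyMetricResults
    smape: float

class MetricName(Enum):  # stand-in for AccuracyMetricNames
    SMAPE = "smape"

def mean_across_folds_sketch(fold_metrics, metric_name) -> float:
    # Pull the named scalar off each fold result and average it; the std
    # helper documented below follows the same pattern with np.std.
    return float(np.mean([getattr(m, metric_name.value) for m in fold_metrics]))

print(mean_across_folds_sketch([FoldResult(10.0), FoldResult(14.0)], MetricName.SMAPE))
# 12.0
```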
calculate_means_for_series_across_cross_validation_folds(folds_of_series: list[pd.Series]) -> pd.Series ¶
Calculate the element-wise mean of pandas Series across cross-validation folds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| folds_of_series | list[Series] | List of pandas Series (e.g., ROI series from different folds) | required |

Returns:

| Type | Description |
|---|---|
| Series | Mean Series with same index as input series |
Source code in mmm_eval/metrics/accuracy_functions.py
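A minimal sketch of the element-wise behavior documented above, assuming the fold series share an index (e.g., one ROI value per channel):

```python
import pandas as pd

# Two hypothetical per-channel ROI series from different CV folds.
fold_1 = pd.Series({"tv": 1.2, "search": 0.8})
fold_2 = pd.Series({"tv": 1.0, "search": 1.0})

# Stacking the folds as columns and reducing row-wise yields a Series with
# the same index as the inputs, matching the documented return value.
stacked = pd.concat([fold_1, fold_2], axis=1)
print(stacked.mean(axis=1))  # tv 1.1, search 0.9
# The std variant documented below corresponds to stacked.std(axis=1).
```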
calculate_smape(actual: pd.Series, predicted: pd.Series) -> float ¶
Calculate Symmetric Mean Absolute Percentage Error (SMAPE).
SMAPE is calculated as: 100 * mean(2 * |actual - predicted| / (|actual| + |predicted|))
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| actual | Series | Actual values | required |
| predicted | Series | Predicted values | required |

Returns:

| Type | Description |
|---|---|
| float | SMAPE value as float (percentage) |

Raises:

| Type | Description |
|---|---|
| ValueError | If series are empty or have different lengths |
Source code in mmm_eval/metrics/metric_models.py
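A self-contained sketch matching the formula and the documented ValueError conditions; the real implementation in mmm_eval/metrics/metric_models.py may treat zero denominators differently:

```python
import numpy as np
import pandas as pd

def smape_sketch(actual: pd.Series, predicted: pd.Series) -> float:
    # Mirror the documented error conditions.
    if len(actual) == 0 or len(predicted) == 0:
        raise ValueError("Series must be non-empty")
    if len(actual) != len(predicted):
        raise ValueError("Series must have the same length")
    a, p = actual.to_numpy(float), predicted.to_numpy(float)
    # 100 * mean(2 * |a - p| / (|a| + |p|))
    return float(100.0 * np.mean(2.0 * np.abs(a - p) / (np.abs(a) + np.abs(p))))

print(smape_sketch(pd.Series([100.0, 200.0]), pd.Series([110.0, 190.0])))
# ~7.33
```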
calculate_std_for_singular_values_across_cross_validation_folds(fold_metrics: list[AccuracyMetricResults], metric_name: AccuracyMetricNames) -> float ¶
Calculate the standard deviation of a single-valued (scalar) metric across cross-validation folds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| fold_metrics | list[AccuracyMetricResults] | List of metric result objects | required |
| metric_name | AccuracyMetricNames | Name of the metric attribute | required |

Returns:

| Type | Description |
|---|---|
| float | Standard deviation value as float |
Source code in mmm_eval/metrics/accuracy_functions.py
calculate_stds_for_series_across_cross_validation_folds(folds_of_series: list[pd.Series]) -> pd.Series ¶
Calculate the element-wise standard deviation of pandas Series across cross-validation folds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| folds_of_series | list[Series] | List of pandas Series (e.g., ROI series from different folds) | required |

Returns:

| Type | Description |
|---|---|
| Series | Standard deviation Series with same index as input series |
Source code in mmm_eval/metrics/accuracy_functions.py
Modules¶
accuracy_functions ¶
Accuracy metrics for MMM evaluation.
Functions¶
calculate_absolute_percentage_change(baseline_series: pd.Series, comparison_series: pd.Series) -> pd.Series ¶
Calculate the absolute percentage change between two series.
Source code in mmm_eval/metrics/accuracy_functions.py
calculate_mean_for_singular_values_across_cross_validation_folds(fold_metrics: list[AccuracyMetricResults], metric_name: AccuracyMetricNames) -> float ¶
Calculate the mean of a single-valued (scalar) metric across cross-validation folds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| fold_metrics | list[AccuracyMetricResults] | List of metric result objects | required |
| metric_name | AccuracyMetricNames | Name of the metric attribute | required |

Returns:

| Type | Description |
|---|---|
| float | Mean value as float |
Source code in mmm_eval/metrics/accuracy_functions.py
calculate_means_for_series_across_cross_validation_folds(folds_of_series: list[pd.Series]) -> pd.Series ¶
Calculate the element-wise mean of pandas Series across cross-validation folds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| folds_of_series | list[Series] | List of pandas Series (e.g., ROI series from different folds) | required |

Returns:

| Type | Description |
|---|---|
| Series | Mean Series with same index as input series |
Source code in mmm_eval/metrics/accuracy_functions.py
calculate_std_for_singular_values_across_cross_validation_folds(fold_metrics: list[AccuracyMetricResults], metric_name: AccuracyMetricNames) -> float ¶
Calculate the standard deviation of a single-valued (scalar) metric across cross-validation folds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| fold_metrics | list[AccuracyMetricResults] | List of metric result objects | required |
| metric_name | AccuracyMetricNames | Name of the metric attribute | required |

Returns:

| Type | Description |
|---|---|
| float | Standard deviation value as float |
Source code in mmm_eval/metrics/accuracy_functions.py
calculate_stds_for_series_across_cross_validation_folds(folds_of_series: list[pd.Series]) -> pd.Series ¶
Calculate the element-wise standard deviation of pandas Series across cross-validation folds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| folds_of_series | list[Series] | List of pandas Series (e.g., ROI series from different folds) | required |

Returns:

| Type | Description |
|---|---|
| Series | Standard deviation Series with same index as input series |
Source code in mmm_eval/metrics/accuracy_functions.py
exceptions ¶
metric_models ¶
Classes¶
AccuracyMetricNames ¶
AccuracyMetricResults ¶
Bases: MetricResults
Define the results of the accuracy metrics.
populate_object_with_metrics(actual: pd.Series, predicted: pd.Series) -> AccuracyMetricResults classmethod ¶
Populate the object with the calculated metrics.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| actual | Series | The actual values | required |
| predicted | Series | The predicted values | required |

Returns:

| Type | Description |
|---|---|
| AccuracyMetricResults | AccuracyMetricResults object with the metrics |
Source code in mmm_eval/metrics/metric_models.py
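A usage sketch based only on the signatures documented above; the import path is inferred from the "Source code in" note and should be checked against the installed package:

```python
import pandas as pd
from mmm_eval.metrics.metric_models import AccuracyMetricResults

actual = pd.Series([100.0, 200.0, 150.0])
predicted = pd.Series([110.0, 190.0, 160.0])

# Build the results object from hold-out predictions, then flatten it
# to the long DataFrame format for reporting.
results = AccuracyMetricResults.populate_object_with_metrics(
    actual=actual, predicted=predicted
)
print(results.to_df())
```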
to_df() -> pd.DataFrame ¶
Convert the accuracy metric results to a long DataFrame format.
Source code in mmm_eval/metrics/metric_models.py
CrossValidationMetricNames ¶
CrossValidationMetricResults ¶
Bases: MetricResults
Define the results of the cross-validation metrics.
to_df() -> pd.DataFrame ¶
Convert the cross-validation metric results to a long DataFrame format.
Source code in mmm_eval/metrics/metric_models.py
MetricNamesBase ¶
Bases: Enum
Base class for metric name enums.
MetricResults ¶
Bases: BaseModel
Define the results of the metrics.
add_pass_fail_column(df: pd.DataFrame) -> pd.DataFrame ¶
Add a pass/fail column to the DataFrame based on metric thresholds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| df | DataFrame | DataFrame with general_metric_name and metric_value columns | required |

Returns:

| Type | Description |
|---|---|
| DataFrame | DataFrame with additional metric_pass column |
Source code in mmm_eval/metrics/metric_models.py
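A sketch of the documented column contract; the threshold value and the "smape" metric name are placeholders, since the real per-metric thresholds live in mmm_eval.metrics.threshold_constants and are not listed here:

```python
import pandas as pd

THRESHOLDS = {"smape": 20.0}  # hypothetical threshold for illustration

def add_pass_fail_sketch(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Pass when the value is within its threshold; the real rule
    # (direction, per-metric logic) may differ.
    out["metric_pass"] = [
        value <= THRESHOLDS.get(name, float("inf"))
        for name, value in zip(out["general_metric_name"], out["metric_value"])
    ]
    return out

df = pd.DataFrame({"general_metric_name": ["smape"], "metric_value": [7.3]})
print(add_pass_fail_sketch(df))  # adds metric_pass=True
```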
to_df() -> pd.DataFrame ¶
Convert the test results to a flat DataFrame format.
PerturbationMetricNames ¶
PerturbationMetricResults ¶
Bases: MetricResults
Define the results of the perturbation metrics.
to_df() -> pd.DataFrame ¶
Convert the perturbation metric results to a long DataFrame format.
Source code in mmm_eval/metrics/metric_models.py
PlaceboMetricNames ¶
PlaceboMetricResults ¶
Bases: MetricResults
Define the results of the placebo test metrics.
to_df() -> pd.DataFrame ¶
Convert the placebo test metric results to a long DataFrame format.
Source code in mmm_eval/metrics/metric_models.py
RefreshStabilityMetricNames ¶
RefreshStabilityMetricResults ¶
Bases: MetricResults
Define the results of the refresh stability metrics.
to_df() -> pd.DataFrame ¶
Convert the refresh stability metric results to a long DataFrame format.
Source code in mmm_eval/metrics/metric_models.py
TestResultDFAttributes ¶
Functions¶
calculate_smape(actual: pd.Series, predicted: pd.Series) -> float ¶
Calculate Symmetric Mean Absolute Percentage Error (SMAPE).
SMAPE is calculated as: 100 * mean(2 * |actual - predicted| / (|actual| + |predicted|))
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| actual | Series | Actual values | required |
| predicted | Series | Predicted values | required |

Returns:

| Type | Description |
|---|---|
| float | SMAPE value as float (percentage) |

Raises:

| Type | Description |
|---|---|
| ValueError | If series are empty or have different lengths |
Source code in mmm_eval/metrics/metric_models.py
threshold_constants ¶
Classes¶
AccuracyThresholdConstants ¶
Constants for the accuracy threshold.
CrossValidationThresholdConstants ¶
Constants for the cross-validation threshold.
PerturbationThresholdConstants ¶
Constants for the perturbation threshold.
PlaceboThresholdConstants ¶
Constants for the placebo test threshold.
RefreshStabilityThresholdConstants ¶
Constants for the refresh stability threshold.