Core API Reference¶
mmm_eval.core
¶
Core validation functionality for MMM frameworks.
Classes¶
BaseValidationTest(date_column: str)
¶
Bases: ABC
Abstract base class for validation tests.
All validation tests must inherit from this class and implement the required methods to provide a unified testing interface.
Initialize the validation test.
Source code in mmm_eval/core/base_validation_test.py
Attributes¶
test_name: str
abstractmethod
property
¶
Return the name of the test.
Returns: Test name (e.g., 'accuracy', 'stability')
Functions¶
run(adapter: BaseAdapter, data: pd.DataFrame) -> ValidationTestResult
abstractmethod
¶
Run the validation test.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
adapter | BaseAdapter | The adapter to validate | required |
data | DataFrame | Input data for validation | required |
Returns:
Type | Description |
---|---|
ValidationTestResult | ValidationTestResult object containing test results |
Source code in mmm_eval/core/base_validation_test.py
run_with_error_handling(adapter: BaseAdapter, data: pd.DataFrame) -> ValidationTestResult
¶
Run the validation test with error handling.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
adapter | BaseAdapter | The adapter to validate | required |
data | DataFrame | Input data for validation | required |
Returns:
Type | Description |
---|---|
ValidationTestResult | ValidationTestResult object containing test results |
Raises:
Type | Description |
---|---|
MetricCalculationError | If metric calculation fails |
TestExecutionError | If test execution fails |
Source code in mmm_eval/core/base_validation_test.py
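To make the contract concrete, here is a minimal sketch of a subclass; the import path and the `NullTest` class are hypothetical, and `run` is left as an outline rather than a real implementation.

```python
import pandas as pd

# Assumed import path; adjust to match your installation.
from mmm_eval.core.base_validation_test import BaseValidationTest


class NullTest(BaseValidationTest):
    """Hypothetical test used only to illustrate the required interface."""

    @property
    def test_name(self) -> str:
        # A made-up name; built-in tests return names like 'accuracy' or 'stability'.
        return "null"

    def run(self, adapter, data: pd.DataFrame):
        # A real test would fit `adapter` on `data`, score its predictions,
        # and return a ValidationTestResult wrapping the computed metrics.
        raise NotImplementedError
```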
ValidationResults(test_results: dict[ValidationTestNames, ValidationTestResult])
¶
Container for complete validation results.
This class holds the results of all validation tests run, including individual test results and an overall summary.
Initialize validation results.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
test_results | dict[ValidationTestNames, ValidationTestResult] | Dictionary mapping test names to their results | required |
Source code in mmm_eval/core/validation_test_results.py
ValidationTestOrchestrator()
¶
Main orchestrator for running validation tests.
This class manages the test registry and executes tests in sequence, aggregating their results.
Initialize the validator with standard tests pre-registered.
Source code in mmm_eval/core/validation_test_orchestrator.py
Functions¶
validate(adapter: BaseAdapter, data: pd.DataFrame, test_names: list[ValidationTestNames]) -> ValidationResults
¶
Run validation tests on the model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
adapter | BaseAdapter | Adapter to use for the test | required |
data | DataFrame | Input data for validation | required |
test_names | list[ValidationTestNames] | List of test names to run | required |
Returns:
Type | Description |
---|---|
ValidationResults | ValidationResults containing all test results |
Raises:
Type | Description |
---|---|
ValueError | If any requested test is not registered |
Source code in mmm_eval/core/validation_test_orchestrator.py
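A usage sketch for the orchestrator; the import paths and the `ACCURACY` enum member are assumptions, and `adapter` and `df` stand for a prepared `BaseAdapter` and input DataFrame.

```python
# Assumed import paths; adjust to match your installation.
from mmm_eval.core.validation_test_orchestrator import ValidationTestOrchestrator
from mmm_eval.core.validation_tests_models import ValidationTestNames

orchestrator = ValidationTestOrchestrator()  # standard tests come pre-registered
results = orchestrator.validate(
    adapter=adapter,  # any BaseAdapter implementation
    data=df,  # input data for validation
    test_names=[ValidationTestNames.ACCURACY],  # enum member name is an assumption
)
```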
ValidationTestResult(test_name: ValidationTestNames, metric_names: list[str], test_scores: AccuracyMetricResults | CrossValidationMetricResults | RefreshStabilityMetricResults | PerturbationMetricResults)
¶
Container for individual test results.
This class holds the results of a single validation test, including pass/fail status, metrics, and any error messages.
Initialize test results.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
test_name | ValidationTestNames | Name of the test | required |
metric_names | list[str] | List of metric names | required |
test_scores | AccuracyMetricResults or CrossValidationMetricResults or RefreshStabilityMetricResults or PerturbationMetricResults | Computed metric results | required |
Source code in mmm_eval/core/validation_test_results.py
Functions¶
to_df() -> pd.DataFrame
¶
Convert test results to a flat DataFrame format.
Source code in mmm_eval/core/validation_test_results.py
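Each per-test result can then be flattened for export; the `test_results` attribute access below is an assumption inferred from the constructor argument of `ValidationResults`.

```python
# `results` is the ValidationResults object returned by orchestrator.validate above.
accuracy_result = results.test_results[ValidationTestNames.ACCURACY]  # access pattern is an assumption
print(accuracy_result.to_df())  # flat DataFrame of metric scores
```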
Modules¶
base_validation_test
¶
Abstract base classes for MMM validation framework.
Classes¶
BaseValidationTest(date_column: str)
¶
Bases: ABC
Abstract base class for validation tests.
All validation tests must inherit from this class and implement the required methods to provide a unified testing interface.
Initialize the validation test.
Source code in mmm_eval/core/base_validation_test.py
test_name: str
abstractmethod
property
¶
Return the name of the test.
Returns: Test name (e.g., 'accuracy', 'stability')
run(adapter: BaseAdapter, data: pd.DataFrame) -> ValidationTestResult
abstractmethod
¶
Run the validation test.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
adapter | BaseAdapter | The adapter to validate | required |
data | DataFrame | Input data for validation | required |
Returns:
Type | Description |
---|---|
ValidationTestResult | ValidationTestResult object containing test results |
Source code in mmm_eval/core/base_validation_test.py
run_with_error_handling(adapter: BaseAdapter, data: pd.DataFrame) -> ValidationTestResult
¶
Run the validation test with error handling.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
adapter | BaseAdapter | The adapter to validate | required |
data | DataFrame | Input data for validation | required |
Returns:
Type | Description |
---|---|
ValidationTestResult | ValidationTestResult object containing test results |
Raises:
Type | Description |
---|---|
MetricCalculationError | If metric calculation fails |
TestExecutionError | If test execution fails |
Source code in mmm_eval/core/base_validation_test.py
Functions¶
split_timeseries_cv(data: pd.DataFrame, n_splits: PositiveInt, test_size: PositiveInt, date_column: str) -> Generator[tuple[np.ndarray, np.ndarray], None, None]
¶
Produce train/test masks for rolling CV, split globally based on date.
This simulates regular refreshes and utilises the last test_size data points for testing in the first fold, using all prior data for training. For a dataset with T dates and test_size = 4, the subsequent test folds follow the pattern [T-4, T], [T-8, T-4], ...
Parameters:
Name | Type | Description | Default |
---|---|---|---|
data | DataFrame | dataframe of MMM data to be split | required |
n_splits | PositiveInt | number of unique folds to generate | required |
test_size | PositiveInt | the number of observations in each testing fold | required |
date_column | str | the name of the date column in the dataframe to split by | required |
Yields:
Type | Description |
---|---|
tuple[ndarray, ndarray] | integer masks corresponding to the training and test set indices |
Source code in mmm_eval/core/base_validation_test.py
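A sketch of the splitter on toy weekly data, assuming the import path below.

```python
import numpy as np
import pandas as pd

# Assumed import path.
from mmm_eval.core.base_validation_test import split_timeseries_cv

rng = np.random.default_rng(0)
df = pd.DataFrame(
    {
        "date": pd.date_range("2024-01-07", periods=20, freq="W"),
        "spend": rng.uniform(0.0, 100.0, size=20),
    }
)

# With test_size=4, the test windows roll back from the latest dates.
for train_idx, test_idx in split_timeseries_cv(df, n_splits=3, test_size=4, date_column="date"):
    print(f"{len(train_idx)} train rows, {len(test_idx)} test rows")
```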
split_timeseries_data(data: pd.DataFrame, test_proportion: PositiveFloat, date_column: str) -> tuple[np.ndarray, np.ndarray]
¶
Split data globally based on date.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
data | DataFrame | timeseries data to split, possibly with another index like geography | required |
test_proportion | PositiveFloat | proportion of test data, must be in (0, 1) | required |
date_column | str | name of the date column | required |
Returns:
Type | Description |
---|---|
tuple[ndarray, ndarray] | boolean masks for training and test data respectively |
Source code in mmm_eval/core/base_validation_test.py
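And the single holdout split, reusing the toy `df` from the previous sketch; the returned masks are boolean and aligned to the DataFrame's rows.

```python
# Assumed import path.
from mmm_eval.core.base_validation_test import split_timeseries_data

# Hold out the most recent 20% of dates for testing.
train_mask, test_mask = split_timeseries_data(df, test_proportion=0.2, date_column="date")
train_df, test_df = df[train_mask], df[test_mask]
```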
constants
¶
evaluator
¶
Main evaluator for MMM frameworks.
Classes¶
Evaluator(data: pd.DataFrame, test_names: tuple[str, ...] | None = None)
¶
Main evaluator class for MMM frameworks.
This class provides a unified interface for evaluating different MMM frameworks using standardized validation tests.
Initialize the evaluator.
Source code in mmm_eval/core/evaluator.py
evaluate_framework(framework: str, config: BaseConfig) -> ValidationResults
¶
Evaluate an MMM framework using the unified API.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
framework | str | Name of the MMM framework to evaluate | required |
config | BaseConfig | Framework-specific configuration | required |
Returns:
Type | Description |
---|---|
ValidationResults | ValidationResults object containing evaluation metrics and predictions |
Raises:
Type | Description |
---|---|
ValueError | If any test name is invalid |
Source code in mmm_eval/core/evaluator.py
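A usage sketch; the framework name and test-name strings are assumptions, and `df` and `config` stand for your input data and a framework-specific `BaseConfig` instance.

```python
# Assumed import path.
from mmm_eval.core.evaluator import Evaluator

evaluator = Evaluator(data=df, test_names=("accuracy",))  # test-name string is an assumption
results = evaluator.evaluate_framework(framework="pymc-marketing", config=config)
```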
exceptions
¶
Custom exceptions for MMM validation framework.
Classes¶
InvalidTestNameError
¶
Bases: ValidationError
Raised when an invalid test name is provided.
MetricCalculationError
¶
Bases: ValidationError
Raised when metric calculation fails.
TestExecutionError
¶
Bases: ValidationError
Raised when test execution fails.
ValidationError
¶
Bases: Exception
Base exception for validation framework errors.
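The hierarchy lets callers catch specific failures first and fall back to the base class; `test`, `adapter`, and `df` below stand for any validation test, adapter, and dataset, and the import path is an assumption.

```python
# Assumed import path.
from mmm_eval.core.exceptions import (
    MetricCalculationError,
    TestExecutionError,
    ValidationError,
)

try:
    result = test.run_with_error_handling(adapter, df)
except MetricCalculationError as err:
    print(f"Metric calculation failed: {err}")
except TestExecutionError as err:
    print(f"Test execution failed: {err}")
except ValidationError as err:
    # Base class catches any other validation framework error.
    print(f"Validation error: {err}")
```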
run_evaluation
¶
Functions¶
run_evaluation(framework: str, data: pd.DataFrame, config: BaseConfig, test_names: tuple[str, ...] | None = None) -> pd.DataFrame
¶
Evaluate an MMM framework.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
framework | str | The framework to evaluate. | required |
data | DataFrame | The data to evaluate. | required |
config | BaseConfig | The config to use for the evaluation. | required |
test_names | tuple[str, ...] or None | The tests to run. If not provided, all tests will be run. | None |
Returns:
Type | Description |
---|---|
DataFrame | A pandas DataFrame containing the evaluation results. |
Source code in mmm_eval/core/run_evaluation.py
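A sketch of the top-level entry point; the import path and framework name are assumptions, and `df` and `config` are placeholders for your data and config.

```python
# Assumed import path.
from mmm_eval.core.run_evaluation import run_evaluation

# With test_names omitted, every registered test is run.
results_df = run_evaluation(framework="pymc-marketing", data=df, config=config)
print(results_df.head())
```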
validation_test_orchestrator
¶
Test orchestrator for MMM validation framework.
Classes¶
ValidationTestOrchestrator()
¶
Main orchestrator for running validation tests.
This class manages the test registry and executes tests in sequence, aggregating their results.
Initialize the validator with standard tests pre-registered.
Source code in mmm_eval/core/validation_test_orchestrator.py
validate(adapter: BaseAdapter, data: pd.DataFrame, test_names: list[ValidationTestNames]) -> ValidationResults
¶
Run validation tests on the model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
adapter | BaseAdapter | Adapter to use for the test | required |
data | DataFrame | Input data for validation | required |
test_names | list[ValidationTestNames] | List of test names to run | required |
Returns:
Type | Description |
---|---|
ValidationResults | ValidationResults containing all test results |
Raises:
Type | Description |
---|---|
ValueError | If any requested test is not registered |
Source code in mmm_eval/core/validation_test_orchestrator.py
validation_test_results
¶
Result containers for MMM validation framework.
Classes¶
ValidationResults(test_results: dict[ValidationTestNames, ValidationTestResult])
¶
Container for complete validation results.
This class holds the results of all validation tests run, including individual test results and an overall summary.
Initialize validation results.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
test_results | dict[ValidationTestNames, ValidationTestResult] | Dictionary mapping test names to their results | required |
Source code in mmm_eval/core/validation_test_results.py
ValidationTestResult(test_name: ValidationTestNames, metric_names: list[str], test_scores: AccuracyMetricResults | CrossValidationMetricResults | RefreshStabilityMetricResults | PerturbationMetricResults)
¶
Container for individual test results.
This class holds the results of a single validation test, including pass/fail status, metrics, and any error messages.
Initialize test results.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
test_name | ValidationTestNames | Name of the test | required |
metric_names | list[str] | List of metric names | required |
test_scores | AccuracyMetricResults or CrossValidationMetricResults or RefreshStabilityMetricResults or PerturbationMetricResults | Computed metric results | required |
Source code in mmm_eval/core/validation_test_results.py
to_df() -> pd.DataFrame
¶
Convert test results to a flat DataFrame format.
Source code in mmm_eval/core/validation_test_results.py
validation_tests
¶
Classes¶
AccuracyTest(date_column: str)
¶
Bases: BaseValidationTest
Validation test for model accuracy using holdout validation.
This test evaluates model performance by splitting data into train/test sets and calculating MAPE and R-squared metrics on the test set.
Source code in mmm_eval/core/base_validation_test.py
test_name: ValidationTestNames
property
¶
Return the name of the test.
run(adapter: BaseAdapter, data: pd.DataFrame) -> ValidationTestResult
¶
Run the accuracy test.
Source code in mmm_eval/core/validation_tests.py
CrossValidationTest(date_column: str)
¶
Bases: BaseValidationTest
Validation test for the cross-validation of the MMM framework.
Source code in mmm_eval/core/base_validation_test.py
test_name: ValidationTestNames
property
¶
Return the name of the test.
run(adapter: BaseAdapter, data: pd.DataFrame) -> ValidationTestResult
¶
Run the cross-validation test using time-series splits.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
adapter | BaseAdapter | Adapter to use for the test | required |
data | DataFrame | Input data | required |
Returns:
Type | Description |
---|---|
ValidationTestResult | ValidationTestResult containing cross-validation metrics |
Source code in mmm_eval/core/validation_tests.py
PerturbationTest(date_column: str)
¶
Bases: BaseValidationTest
Validation test for the perturbation of the MMM framework.
Source code in mmm_eval/core/base_validation_test.py
test_name: ValidationTestNames
property
¶
Return the name of the test.
run(adapter: BaseAdapter, data: pd.DataFrame) -> ValidationTestResult
¶
Run the perturbation test.
Source code in mmm_eval/core/validation_tests.py
RefreshStabilityTest(date_column: str)
¶
Bases: BaseValidationTest
Validation test for the stability of the MMM framework.
Source code in mmm_eval/core/base_validation_test.py
test_name: ValidationTestNames
property
¶
Return the name of the test.
run(adapter: BaseAdapter, data: pd.DataFrame) -> ValidationTestResult
¶
Run the stability test.
Source code in mmm_eval/core/validation_tests.py
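Individual tests can also be run directly; a sketch, assuming the import path below and a prepared `adapter` and `df`.

```python
# Assumed import path.
from mmm_eval.core.validation_tests import AccuracyTest

test = AccuracyTest(date_column="date")
result = test.run_with_error_handling(adapter, df)  # returns a ValidationTestResult
print(result.to_df())
```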
validation_tests_models
¶
Classes¶
ValidationResultAttributeNames
¶
Bases: StrEnum
Define the names of the validation result attributes.
ValidationTestAttributeNames
¶
Bases: StrEnum
Define the names of the validation test attributes.