Contributing to mmm-eval¶
We welcome contributions from the community! This guide will help you get started with contributing to mmm-eval.
Getting Started¶
Prerequisites¶
- Python 3.11+ - Required for development
- Git - For version control
- Poetry - For dependency management (recommended)
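You can quickly confirm the required tooling is available before setting up (exact version output will vary):
# Python should report 3.11 or newer
python --version
git --version
poetry --version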
Development Setup¶
- Fork the repository on GitHub
- Clone your fork
- Set up the development environment
- (Optional) Install pre-commit hooks
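A minimal sketch of the clone, setup, and hook steps, assuming Poetry is used for dependency management and pre-commit for the hooks (replace YOUR_USERNAME with your GitHub handle; the repository name in the URL follows the project name):
# Clone your fork and enter the project directory
git clone https://github.com/YOUR_USERNAME/mmm-eval.git
cd mmm-eval
# Install the project and its development dependencies with Poetry
poetry install
# Optional: install the pre-commit hooks so checks run on every commit
poetry run pre-commit install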
Development Workflow¶
1. Create a Feature Branch¶
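For example, branching off an up-to-date default branch (assuming it is named main; the feature branch name below is illustrative):
# Start from the latest main branch
git checkout main
git pull origin main
# Create a descriptively named feature branch
git checkout -b feature/my-new-feature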
2. Make Your Changes¶
- Write your code following the coding standards
- Add tests for new functionality
- Update documentation as needed
3. Test Your Changes¶
# Run all tests
tox
# Run specific test categories
pytest tests/unit/
pytest tests/integration/
# Run linting and formatting
black mmm_eval tests
isort mmm_eval tests
ruff check mmm_eval tests
4. Commit Your Changes¶
Where possible, please follow the conventional commits format:
- feat: New features
- fix: Bug fixes
- docs: Documentation changes
- style: Code style changes
- refactor: Code refactoring
- test: Test changes
- chore: Maintenance tasks
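For example, a commit that adds a new metric might be recorded like this (the message text is purely illustrative):
# Stage and commit your changes with a conventional commit message
git add .
git commit -m "feat: add support for a new accuracy metric"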
5. Push and Create a Pull Request¶
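Push your branch to your fork (using the illustrative branch name from step 1):
# Push the feature branch and set its upstream
git push -u origin feature/my-new-feature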
Then create a pull request on GitHub.
Coding Standards¶
Python Code Style¶
We use several tools to maintain code quality:
- Black - Code formatting
- isort - Import sorting
- Ruff - Linting and additional checks
- Pyright - Type checking
Code Formatting¶
# Format code
black mmm_eval tests
# Sort imports
isort mmm_eval tests
# Run linting
ruff check mmm_eval tests
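# Automatically fix what Ruff can fix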
ruff check --fix mmm_eval tests
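The tools list above also mentions Pyright for type checking; assuming it is installed as a development dependency, it can be run the same way:
# Run type checking
pyright mmm_eval tests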
Type Hints¶
We use type hints throughout the codebase:
from typing import List, Optional, Dict, Any
def process_data(data: List[Dict[str, Any]]) -> Optional[Dict[str, float]]:
    """Process the input data and return results."""
    pass
Docstrings¶
Use Google-style docstrings:
def calculate_mape(actual: List[float], predicted: List[float]) -> float:
    """Calculate Mean Absolute Percentage Error.

    Args:
        actual: List of actual values
        predicted: List of predicted values

    Returns:
        MAPE value as a float

    Raises:
        ValueError: If inputs are empty or have different lengths
    """
    pass
Testing¶
Running Tests¶
# Run all tests
pytest
# Run with coverage
pytest --cov=mmm_eval
# Run specific test file
pytest tests/test_metrics.py
# Run tests in parallel
pytest -n auto
Writing Tests¶
- Place tests in the tests/ directory
- Use descriptive test names
- Test both success and failure cases
- Use fixtures for common test data
Example test:
import pytest

from mmm_eval.metrics import calculate_mape


def test_calculate_mape_basic():
    """Test basic MAPE calculation."""
    actual = [100, 200, 300]
    predicted = [110, 190, 310]

    mape = calculate_mape(actual, predicted)

    assert isinstance(mape, float)
    assert mape > 0


def test_calculate_mape_empty_input():
    """Test MAPE calculation with empty input."""
    with pytest.raises(ValueError):
        calculate_mape([], [])
Documentation¶
Updating Documentation¶
- Update docstrings in the code
- Update markdown files in the docs/ directory
- Build and test documentation (see the commands after this list)
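Assuming the documentation site is built with MkDocs, previewing and checking it locally might look like this:
# Serve the docs locally with live reload
mkdocs serve
# Build the static site and fail on warnings
mkdocs build --strict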
Documentation Standards¶
- Use clear, concise language
- Include code examples
- Keep documentation up to date with code changes
- Use proper markdown formatting
Pull Request Guidelines¶
Before Submitting¶
- Ensure all tests pass
- Update documentation if needed
- Add tests for new functionality
- Follow coding standards
- Update CHANGELOG.md
Pull Request Template¶
Use the provided pull request template and fill in all sections:
- Description - What does this PR do?
- Type of change - Bug fix, feature, documentation, etc.
- Testing - How was this tested?
Review Process¶
- Automated checks must pass
- Code review by maintainers
- Documentation review if needed
- Merge after approval
Issue Reporting¶
Before Creating an Issue¶
- Search existing issues to avoid duplicates
- Check documentation for solutions
- Update to the latest version of mmm-eval
Issue Template¶
Be sure to include:
- Description - Clear description of the problem
- Steps to reproduce - Detailed steps
- Expected behavior - What should happen
- Actual behavior - What actually happens
- Environment - OS, Python version, mmm-eval version
- Additional context - Any other relevant information
Community Guidelines¶
Code of Conduct¶
We are committed to providing a welcoming and inclusive environment. Please:
- Be respectful and inclusive
- Use welcoming and inclusive language
- Be collaborative and constructive
- Focus on what is best for the community
Communication¶
- GitHub Issues - For bug reports and feature requests
- GitHub Discussions - For questions and general discussion
- Pull Requests - For code contributions
Getting Help¶
If you need help with contributing:
- Check the documentation first
- Search existing issues and discussions
- Create a new discussion for questions
- Join our community channels
Recognition¶
Contributors will be recognized in:
- README.md - For significant contributions
- CHANGELOG.md - For all contributions
- GitHub contributors page
Thank you for contributing to mmm-eval! 🎉