mllint — Linter for Machine Learning projects

I’m looking for help evaluating the efficacy of mllint! If you have used mllint, please consider filling in my 15-minute survey, as it is extremely important for my MSc thesis!

mllint is a command-line utility to evaluate the technical quality of Machine Learning (ML) and Artificial Intelligence (AI) projects written in Python by analysing the project’s source code, data and configuration of supporting tools.

mllint aims to …

  • … help data scientists and ML engineers in creating and maintaining production-grade ML and AI projects, both on their own personal computers as well as on CI.
  • … help ML practitioners inexperienced with Software Engineering (SE) techniques explore and make effective use of battle-hardened SE for ML tools in the Python ecosystem.
  • … help ML project managers assess the quality of their ML and AI projects and receive recommendations on what aspects of their projects they should focus on improving.

mllint does this by measuring the project’s adherence to ML best practices, as collected and deduced from SE4ML and Google’s Rules for ML. Note that these best practices are rather high-level, while mllint aims to give practical, down-to-earth advice to its users. mllint may therefore be somewhat opinionated, as it tries to advocate specific tools to best fit these best practices. However, mllint aims to only recommend open-source tooling and publicly verifiable practices. Feedback is of course always welcome!

mllint was created during my MSc thesis in Computer Science at the Software Engineering Research Group (SERG) at TU Delft and ING’s AI for FinTech Research Lab, on the topic of Code Quality and Software Engineering for Machine Learning projects.

See also the mllint-example-projects repository to explore the reports of an example project using mllint to measure and improve its project quality over several iterations.

See this to view the report generated for the example project in the demo below.

Example run of mllint


mllint is compiled for Linux, macOS and Windows and is published to PyPI, so it can be installed using pip install -U mllint. Alternatively, use one of the Docker containers at bvobart/mllint.
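As a quick sketch of the commands above (the Docker mount point /app is an assumption; check the image’s documentation for the exact invocation):

```shell
# Install mllint from PyPI, upgrading if already installed:
pip install -U mllint

# Run mllint on the current directory, or point it at a project folder:
mllint
mllint path/to/my-ml-project

# Alternatively, run the Docker container with your project mounted:
docker run -it --rm -v "$(pwd)":/app bvobart/mllint
```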

2 min · Bart van Oort (bvobart)




mllint can be configured either using a .mllint.yml file or through the project’s pyproject.toml. This allows you to:

  • selectively disable specific linting rules or categories using their slug
  • define custom linting rules
  • configure specific settings for various linting rules

See the code snippets and commands provided below for examples of such configuration files. Commands: To print mllint’s current configuration in YAML format, run (optionally providing the path to the project’s folder):...
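For illustration, disabling a rule by its slug might look as follows in each format (the slug shown is an example; use mllint describe to find the slugs of actual rules):

```yaml
# .mllint.yml
rules:
  disabled:
    - version-control/code/git
```

```toml
# pyproject.toml
[tool.mllint.rules]
disabled = ["version-control/code/git"]
```

Running mllint config then prints the configuration that mllint resolves for the project.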


Category — Code Quality

This category assesses your project’s code quality by running several static analysis tools on your project. Static analysis tools analyse your code without actually running it, in an attempt to find potential bugs, refactoring opportunities and/or coding style violations. The linter for this category will check whether your project is using the configured set of code quality linters. mllint supports (and by default requires) the following linters:

  • pylint
  • mypy
  • black
  • isort
  • bandit

For your project to be considered to be using a linter…...
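A configuration for this category could plausibly look like the snippet below; note that the exact key names (code-quality, linters) are an assumption here, so consult the configuration documentation for the definitive schema:

```yaml
# .mllint.yml — restrict the set of required code quality linters (hypothetical keys)
code-quality:
  linters:
    - pylint
    - mypy
    - isort
```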


Category — Continuous Integration

This category checks whether your project uses Continuous Integration (CI) and how you are using it. Continuous Integration is the practice of automating the integration (merging) of all changes that multiple developers make to a software project. This is done by running an automated process for every commit to your project’s Git repository. This process then downloads your project’s source code at that commit, builds it, runs the linters configured for the project (we hope you include mllint) and runs the project’s tests against the system....
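As one possible sketch of running mllint in CI, assuming GitHub Actions (the workflow below is illustrative, not mllint’s prescribed setup):

```yaml
# .github/workflows/mllint.yml (hypothetical workflow)
name: mllint
on: push
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -U mllint
      - run: mllint
```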


Category — Custom Rules

This category enables you to write your own custom evaluation rules for mllint. Custom rules can be useful for enforcing team, company or organisational practices, as well as implementing checks and analyses for how your proprietary / closed-source tools are being used. Custom rules may also be useful for creating ‘plugins’ for mllint that implement checks on tools that mllint does not yet have built-in rules for. mllint will pick up these custom rules from your configuration and automatically run their checks during its analysis....
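To give a feel for what defining such a rule might look like, here is a sketch in .mllint.yml. The field names (name, slug, details, weight, run) and the check script are assumptions for illustration; the definitive schema is in mllint’s custom-rules documentation:

```yaml
# .mllint.yml — a hypothetical custom rule
rules:
  custom:
    - name: Uses our internal experiment tracker
      slug: custom/internal-tracker
      details: Checks that experiments are logged to our proprietary tracker.
      weight: 1
      run: python ./scripts/check_tracker.py
```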


Category — Data Quality

This category assesses your project’s data quality. It is not implemented yet. The idea is that this category will contain rules on whether you have proper data cleaning scripts, and it may also include dynamic checks on the data that is currently in the repository, e.g. whether it is complete (no missing values) and whether the type of each value is consistent. Perhaps with data-linter or tensorflow-data-validation.
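To make the two dynamic checks mentioned above concrete, here is a minimal standard-library sketch (the column names and data are made up for illustration; this is not mllint code):

```python
# A sketch of the kind of dynamic data checks described above:
# completeness (no missing values) and per-column type consistency.
rows = [
    {"age": 23, "label": "cat"},
    {"age": 35, "label": "dog"},
    {"age": None, "label": "cat"},  # 'age' is missing here
]

def missing_count(rows: list[dict], column: str) -> int:
    """Completeness check: how many values are missing in a column?"""
    return sum(1 for row in rows if row[column] is None)

def types_consistent(rows: list[dict], column: str) -> bool:
    """Type-consistency check: do all present values share one type?"""
    present = [row[column] for row in rows if row[column] is not None]
    return len({type(value) for value in present}) <= 1

print(missing_count(rows, "age"))       # 1
print(types_consistent(rows, "label"))  # True
```

Tools like tensorflow-data-validation infer such constraints (as a schema) from the data automatically, rather than requiring hand-written checks.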


Category — Dependency Management

This category deals with how your project manages its dependencies: the Python packages that your project uses to make it work, such as scikit-learn, pandas, tensorflow and pytorch. Proper dependency management, i.e., properly specifying which packages your project uses and which exact versions of those packages are being used, is important for being able to recreate the environment that your project was developed in. This allows other developers, automated deployment systems, or even just yourself, to install exactly those Python packages that you had installed while developing your project....
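For illustration, exact version pinning could look like either of the snippets below (package versions are made up for the example; a lock file generated by a tool such as Poetry or pip-tools achieves the same, transitively):

```
# requirements.txt — every dependency pinned to an exact version
scikit-learn==1.3.0
pandas==2.0.3
```

```toml
# pyproject.toml with Poetry — versions resolved and locked in poetry.lock
[tool.poetry.dependencies]
python = "^3.10"
scikit-learn = "1.3.0"
pandas = "2.0.3"
```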


Category — Deployment

This category evaluates your project’s ability to be deployed in the real world. It is not yet implemented, but may contain rules about Dockerfiles and configurability, among others. Recommendations: SeldonCore - An open source platform to deploy your machine learning models on Kubernetes at massive scale. Seldon handles scaling to thousands of production machine learning models and provides advanced machine learning capabilities out of the box including Advanced Metrics, Request Logging, Explainers, Outlier Detectors, A/B Tests, Canaries and more....
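Since this category is not yet implemented, here is only a hypothetical sketch of the kind of Dockerfile such rules might examine (file names and the serving script are illustrative assumptions):

```dockerfile
# Hypothetical Dockerfile for serving a trained model
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Configuration via environment variables keeps the image reusable
ENV MODEL_PATH=/app/models/model.pkl
CMD ["python", "serve.py"]
```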


Category — File Structure

This category deals with the file and folder structure of your ML project. It is not implemented yet. Examples of rules you might see here in the future:

  • Project keeps its data in the ‘./data’ folder.
  • Project maintains documentation in a ‘./docs’ folder.
  • Project’s source code is kept in a ‘./src’ folder, or a folder with the same name as the project / package.
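Put together, the rules listed above would describe a layout roughly like this (an illustrative sketch, not a layout mllint currently enforces):

```
my-ml-project/
├── data/             # datasets live here
├── docs/             # project documentation
└── src/              # source code (or a my_ml_project/ package folder)
```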
