Introduction
In recent years, machine learning has become a powerful tool for solving complex problems across many industries. However, machine learning models can be difficult to understand and debug due to their complex, non-linear structure.
Machine learning observability is a set of techniques for understanding how a machine learning system is performing in practice and for debugging it when it misbehaves — monitoring prediction quality, detecting data and concept drift, and tracing failures back to their root causes.
The current Python ecosystem of tools and frameworks for machine learning observability is in growth mode. Many open-source projects and start-ups are popping up in this space, and this article will review some of the current tooling available for ML observability in Python (in no specific order).
Deepchecks
Deepchecks is an open-source tool for testing and validating machine learning models and data with minimal effort. It accompanies you through various validation and testing needs, such as verifying your data's integrity, inspecting its distributions, validating data splits, evaluating your model, and comparing different models.