How to Log Your Data with MLflow


Jan 19, 2025 - 15:55

MLflow, MLOps, Data Science

Mastering data logging in MLOps for your AI workflow

Photo by Chris Liverani on Unsplash

Preface

Data is one of the most critical components of the machine learning process. In fact, the quality of the data used in training a model often determines the success or failure of the entire project. While algorithms and models are important, they are powerless without data that is accurate, clean, and representative of the problem you’re trying to solve. Whether you are dealing with structured data, unstructured data, or large-scale datasets, the preparation and understanding of data lay the foundation for any machine learning system. A well-curated dataset can provide the necessary signals for a model to learn effectively, while poor or biased data can lead to incorrect predictions, overfitting, or even harmful societal impacts when models are deployed in real-world applications.

With the increasing complexity of machine learning workflows, ensuring the reproducibility and traceability of experiments has become a key concern. This is where MLOps (Machine Learning Operations) comes into play. MLOps is the practice of bringing together data science and operations to automate and streamline machine learning workflows. One critical aspect of MLOps is tracking datasets throughout the entire lifecycle of a machine learning project. Tracking is not just about capturing the parameters and metrics of models; it’s equally important to log the datasets used during each phase of the process. This ensures that, down the line, when models are re-evaluated or retrained, the same datasets can be referenced, tested, or reused. It allows for better comparison of results, clearer understanding of changes, and, most importantly, results that others can reproduce. As a best practice, I recommend logging the data used in every training run.


In this context, MLflow plays a pivotal role by offering a suite of tools that help streamline these MLOps practices. One of the key components of MLflow is its mlflow.data module, which provides the ability to log datasets as part of machine learning experiments. The mlflow.data module ensures that datasets are properly documented, their metadata is tracked, and they can be retrieved and reused in future runs. This helps prevent common problems like "data drift" where models start to perform worse because of subtle changes in the underlying data. MLflow’s ability to track datasets alongside models and parameters ensures that any new experiment can reliably compare results against previous runs using the exact same data, providing transparency and clarity.

Photo by Brian McGowan on Unsplash

Overview

This article will guide you through the best practices of logging a dataset in MLflow using the California Housing Dataset as an example. We’ll explore mlflow.data interfaces and demonstrate how you can track and manage your dataset during an ML experiment.

Imagine you are a data scientist working on a project to predict housing prices in California based on various features such as median income, population, and location. You’ve spent hours curating a dataset from multiple sources, cleaning it, and ensuring it’s ready for training. Now you’re ready to run your machine learning experiments. Logging your dataset at this stage is crucial because it serves as a snapshot of the exact data used in this specific training run. Should you need to revisit the experiment months later — perhaps to improve the model, tune hyperparameters, or audit the results — you want to ensure that you are using the same dataset to maintain consistency and comparability. Without logging your dataset, it’s easy to lose track of which version of the data was used, especially if the data is updated or changed over time.

Without further ado, let’s get started!

Set Up

Setting up an MLflow server locally is straightforward. Use the following command:

mlflow server --host 127.0.0.1 --port 8080

Then set the tracking URI.

mlflow.set_tracking_uri("http://127.0.0.1:8080")

For more advanced configurations, refer to the MLflow documentation.

The California housing dataset

Photo by Robert Bye on Unsplash

For this article, we are using the California housing dataset (CC BY license). However, you can apply the same principles to log and track any dataset of your choice.

For more information on the California housing dataset, refer to this doc.

Dataset and DatasetSource

mlflow.data.dataset.Dataset

Before diving into dataset logging, evaluation, and retrieval, it’s important to understand the concept of datasets in MLflow. MLflow provides the mlflow.data.dataset.Dataset object, which represents datasets used with MLflow Tracking.

class mlflow.data.dataset.Dataset(source: mlflow.data.dataset_source.DatasetSource, name: Optional[str] = None, digest: Optional[str] = None)

This object comes with key properties:

  • A required source parameter: the origin of the dataset, provided as an mlflow.data.dataset_source.DatasetSource object.
  • Optional digest (a fingerprint of the dataset) and name parameters.
  • schema and profile to describe the dataset’s structure and statistical properties.
  • Information about the dataset’s source, such as its storage location.

You can easily convert the dataset into a dictionary using to_dict() or a JSON string using to_json().

Support for Popular Dataset Formats

MLflow makes it easy to work with various types of datasets through specialized classes that extend the core mlflow.data.dataset.Dataset. At the time of writing this article, here are some of the notable dataset classes supported by MLflow:

  • pandas: mlflow.data.pandas_dataset.PandasDataset
  • NumPy: mlflow.data.numpy_dataset.NumpyDataset
  • Spark: mlflow.data.spark_dataset.SparkDataset
  • Hugging Face: mlflow.data.huggingface_dataset.HuggingFaceDataset
  • TensorFlow: mlflow.data.tensorflow_dataset.TensorFlowDataset
  • Evaluation Datasets: mlflow.data.evaluation_dataset.EvaluationDataset

All these classes come with a convenient mlflow.data.from_* API for loading datasets directly into MLflow. This makes it easy to construct and manage datasets, regardless of their underlying format.

mlflow.data.dataset_source.DatasetSource

The mlflow.data.dataset_source.DatasetSource class represents the origin of a dataset in MLflow. When creating a mlflow.data.dataset.Dataset object, the source parameter can be specified either as a string (e.g., a file path or URL) or as an instance of DatasetSource.

class mlflow.data.dataset_source.DatasetSource

If a string is provided as the source, MLflow internally calls the resolve_dataset_source function. This function iterates through a predefined list of data sources and DatasetSource classes to determine the most appropriate source type. However, MLflow's ability to accurately resolve the dataset's source is limited, especially when the candidate_sources argument (a list of potential sources) is set to None, which is the default.

In cases where the source cannot be resolved, MLflow raises an exception. As a best practice, I recommend explicitly creating and passing an instance of the appropriate mlflow.data.dataset_source.DatasetSource subclass when defining the dataset's origin. Built-in subclasses include:

  • class HTTPDatasetSource(DatasetSource)
  • class DeltaDatasetSource(DatasetSource)
  • class FileSystemDatasetSource(DatasetSource)
  • class HuggingFaceDatasetSource(DatasetSource)
  • class SparkDatasetSource(DatasetSource)


Logging datasets with mlflow.log_input() API

Photo by Claudio Schwarz on Unsplash

One of the most straightforward ways to log datasets in MLflow is through the mlflow.log_input() API. This allows you to log datasets in any format that is compatible with mlflow.data.dataset.Dataset, which can be extremely helpful when managing large-scale experiments.

Step-by-Step Guide

First, let’s fetch the California Housing dataset and convert it into a pandas.DataFrame for easier manipulation. Here, we create a dataframe that combines both the feature data (california_data) and the target data (california_target).

import pandas as pd
from sklearn.datasets import fetch_california_housing

california_housing = fetch_california_housing()
california_data: pd.DataFrame = pd.DataFrame(california_housing.data, columns=california_housing.feature_names)
california_target: pd.DataFrame = pd.DataFrame(california_housing.target, columns=['Target'])

california_housing_df: pd.DataFrame = pd.concat([california_data, california_target], axis=1)

To log the dataset with meaningful metadata, we define a few parameters like the data source URL, dataset name, and target column. These will provide helpful context when retrieving the dataset later.

If we look deeper into the fetch_california_housing source code, we can see that the data originated from https://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.tgz.

from mlflow.data.dataset_source import DatasetSource
from mlflow.data.http_dataset_source import HTTPDatasetSource

dataset_source_url: str = 'https://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.tgz'
dataset_source: DatasetSource = HTTPDatasetSource(url=dataset_source_url)
dataset_name: str = 'California Housing Dataset'
dataset_target: str = 'Target'
dataset_tags = {
    'description': california_housing.DESCR,
}

Once the data and metadata are defined, we can convert the pandas.DataFrame into an mlflow.data.Dataset object.

import mlflow.data
from mlflow.data.pandas_dataset import PandasDataset

dataset: PandasDataset = mlflow.data.from_pandas(
    df=california_housing_df, source=dataset_source, targets=dataset_target, name=dataset_name
)

print(f'Dataset name: {dataset.name}')
print(f'Dataset digest: {dataset.digest}')
print(f'Dataset source: {dataset.source}')
print(f'Dataset schema: {dataset.schema}')
print(f'Dataset profile: {dataset.profile}')
print(f'Dataset targets: {dataset.targets}')
print(f'Dataset predictions: {dataset.predictions}')
print(dataset.df.head())

Example Output:

Dataset name: California Housing Dataset
Dataset digest: 55270605
Dataset source:
Dataset schema: ['MedInc': double (required), 'HouseAge': double (required), 'AveRooms': double (required), 'AveBedrms': double (required), 'Population': double (required), 'AveOccup': double (required), 'Latitude': double (required), 'Longitude': double (required), 'Target': double (required)]
Dataset profile: {'num_rows': 20640, 'num_elements': 185760}
Dataset targets: Target
Dataset predictions: None
MedInc HouseAge AveRooms AveBedrms Population AveOccup Latitude Longitude Target
0 8.3252 41.0 6.984127 1.023810 322.0 2.555556 37.88 -122.23 4.526
1 8.3014 21.0 6.238137 0.971880 2401.0 2.109842 37.86 -122.22 3.585
2 7.2574 52.0 8.288136 1.073446 496.0 2.802260 37.85 -122.24 3.521
3 5.6431 52.0 5.817352 1.073059 558.0 2.547945 37.85 -122.25 3.413
4 3.8462 52.0 6.281853 1.081081 565.0 2.181467 37.85 -122.25 3.422

Note that you can also convert the dataset to a dictionary to access additional properties like source_type:

for k, v in dataset.to_dict().items():
    print(f'{k}: {v}')

Example Output:

name: California Housing Dataset
digest: 55270605
source: {"url": "https://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.tgz"}
source_type: http
schema: {"mlflow_colspec": [{"type": "double", "name": "MedInc", "required": true}, {"type": "double", "name": "HouseAge", "required": true}, {"type": "double", "name": "AveRooms", "required": true}, {"type": "double", "name": "AveBedrms", "required": true}, {"type": "double", "name": "Population", "required": true}, {"type": "double", "name": "AveOccup", "required": true}, {"type": "double", "name": "Latitude", "required": true}, {"type": "double", "name": "Longitude", "required": true}, {"type": "double", "name": "Target", "required": true}]}
profile: {"num_rows": 20640, "num_elements": 185760}

Now that we have our dataset ready, it’s time to log it in an MLflow run. This allows us to capture the dataset’s metadata, making it part of the experiment for future reference.

with mlflow.start_run():
    mlflow.log_input(dataset=dataset, context='training', tags=dataset_tags)