Run ML model on live data to build interactive reports


Using ML models to predict and solve business use cases is currently a lengthy and iterative process. Data scientists and engineers continuously tackle challenges, from building and improving the models to deploying them and using them in production applications for tangible utility. A significant amount of time and energy is spent on development, deployment, and maintenance that could otherwise be spent on training and improving the models.

Today we are releasing two new features in dstack that we believe are a first step toward improving this situation.

  1. Push and pull models directly from Python to build a model registry
  2. Schedule jobs written in Python to run models against live data

[Diagram: the dstack workflow]

Here’s the description of the steps shown in the diagram:

  1. The data scientist pushes the trained model to the dstack’s model registry from a Jupyter notebook, a script, or any ML Ops solution.
  2. The data scientist creates a job on the dstack server.
  3. The job pulls the model from the dstack’s model registry.
  4. The job loads the data from a data warehouse and passes it to the model.
  5. The job makes plots and pushes them into dstack as reports.
  6. The business user accesses the report on the dstack’s server.
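The six steps above can be sketched as a tiny script. Note that every name here is a hypothetical stand-in for illustration only; the real dstack API calls are shown later in this post.

```python
# Toy sketch of the job's control flow shown in the diagram.
# pull_model, load_data, and make_report are hypothetical stand-ins
# for the actual dstack and warehouse calls covered below.

def pull_model():
    # stands in for pulling the trained model from the Model Registry
    return lambda xs: [x + 1 for x in xs]  # toy "model"

def load_data():
    # stands in for querying the data warehouse
    return [1, 2, 3]

def make_report(predictions):
    # stands in for plotting the predictions and pushing them as a report
    return {"churn_demo/app": predictions}

model = pull_model()
report = make_report(model(load_data()))
print(report)  # {'churn_demo/app': [2, 3, 4]}
```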

These new features, combined with the existing ability to build interactive data reports with Python and other open-source libraries (see the open-source announcement), will simplify the creation of interactive data applications for business-specific use cases (e.g., modeling, pricing, revenue prediction, KPI monitoring, manufacturing issues, and others).

Push and pull ML models to dstack Model Registry

The ability to push and pull models to the dstack Model Registry with the Python API lets you decouple model development from application development. You can now host your models in the dstack Model Registry in the same way you host your Docker images in a Docker registry.

It’s as simple as training the model, pushing it to dstack, and pulling it later from the dstack server. This provides an idiomatic approach to versioning and managing different instances of your model. In addition, it lets you isolate model creation from model consumption. Currently, dstack supports models built with TensorFlow, PyTorch, and scikit-learn.

Let’s say you want to build an application for sales to predict customer churn. In this toy example, let’s build this model and push it to the dstack Model Registry. We start by importing the required libraries: scikit-learn and pandas to create the ML model, and dstack to push/pull it.

import pandas as pd
from sklearn.linear_model import LogisticRegression

# load data from a data source
# ...

# prepare data here, get X_train, X_test and y_train

model = LogisticRegression()  # or any other model from sklearn, TensorFlow, or PyTorch
result = model.fit(X_train, y_train)

prediction_test = model.predict(X_test)

Once your model is trained and ready, you can push it to dstack Model Registry simply using the ds.push function.

import dstack as ds

ds.push("churn_demo/lr_model", model, "Some description of your model")

The first argument of the function ds.push is the name of the stack which stores the ML model in the dstack server.

Notice how we used the ds.push function to push the ML model churn_demo/lr_model and store it in the dstack Model Registry. As you continue to work on the model, you can repeatedly push improved versions to dstack, which will track all revisions of the model.
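Conceptually, the registry behaves like a per-stack list of revisions, with each push appending a new one. Here is a toy in-memory sketch of that idea; this is a simplified analogy for illustration, not dstack's actual implementation.

```python
# Toy in-memory registry illustrating how repeated pushes to the same
# stack accumulate revisions (an analogy, not dstack's real storage).
registry = {}

def push(stack, model):
    registry.setdefault(stack, []).append(model)
    return len(registry[stack]) - 1  # revision index

def pull(stack, revision=None):
    revisions = registry[stack]
    return revisions[-1] if revision is None else revisions[revision]

push("churn_demo/lr_model", "model-v1")
push("churn_demo/lr_model", "model-v2")

print(pull("churn_demo/lr_model"))     # model-v2 (latest by default)
print(pull("churn_demo/lr_model", 0))  # model-v1 (a specific revision)
```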

If you want to retrieve this model later on – to share it with others or to use it in a Python application, you can pull it back from dstack the following way:

my_model = ds.pull("churn_demo/lr_model")

In this case, you’ll get the same instance of the LogisticRegression model that you pushed to the Model Registry. Once you’ve pulled the model, you can use it right away.

You can also pull a specific revision of the model using the revision ID or a parameter value: e.g.,

my_model = ds.pull("churn_demo/lr_model", params={"someParam": someValue, "someOtherParam": someOtherValue})
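One way to picture a parameter-based pull is as a lookup that returns the newest revision whose attached params match every requested key/value pair. The sketch below is an assumption made for illustration (it mirrors the placeholder names in the snippet above), not dstack's documented lookup semantics.

```python
# Toy sketch: each revision carries a params dict; pulling with params
# returns the newest revision whose params match all requested pairs.
revisions = [
    {"params": {"someParam": 1, "someOtherParam": "a"}, "model": "rev-1"},
    {"params": {"someParam": 2, "someOtherParam": "b"}, "model": "rev-2"},
]

def pull_by_params(params):
    for rev in reversed(revisions):  # newest first
        if all(rev["params"].get(k) == v for k, v in params.items()):
            return rev["model"]
    raise KeyError("no matching revision")

print(pull_by_params({"someParam": 2, "someOtherParam": "b"}))  # rev-2
```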

The model itself is stored on the dstack server in a format based on the type and the framework used to create the model: either weights or pickled objects. When both options are available, you can choose between them via the API. Here is what the pushed model looks like. You can also find a screenshot of the Model Registry below.
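For the pickled-object option, the round trip is conceptually a serialize/deserialize with Python's stdlib pickle. A minimal sketch, using a made-up TinyModel class rather than dstack's internal code:

```python
import pickle

# Minimal illustration of pickled-object storage: "push" serializes
# the model to bytes a registry could store; "pull" restores the
# same instance later, ready for predictions. TinyModel is made up.
class TinyModel:
    def __init__(self, coef):
        self.coef = coef

    def predict(self, x):
        return self.coef * x

blob = pickle.dumps(TinyModel(2.0))  # what a push would store
restored = pickle.loads(blob)        # what a pull would give back
print(restored.predict(3.0))         # 6.0
```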

[Screenshot: the Model Registry]

Schedule jobs to run ML models on live data

Now, imagine that you want to call your model regularly on your enterprise data, e.g. to build an interactive data report. In this case, you can create a dstack Job to run this code on a regular schedule. To do this, go to the “Jobs” tab in your dstack server application.

[Screenshot: the Jobs tab]

You’ll see a lightweight code editor where you can paste the code of your job. The job will query the enterprise data (e.g. from a data source or a data warehouse), pull the ML model (by its name from the Model Registry), make predictions, pass them to a visualization library, and then push the resulting visualizations or data to dstack to make an interactive report.

In the job, you can pull your model to use together with data and visualization libraries.

loaded_model = ds.pull("churn_demo/lr_model")

Before passing the data to the model, you might filter and transform the data to make sure that the relevant data is used to build the interactive report.

# load data from a data source
# ...

# prepare data, assign to X

y_pred = loaded_model.predict(X)

Also within your jobs, you can use the dstack library to build an interactive report. The dstack library allows you to pass parameters with every visualization or data frame. Once pushed, the multiple visualizations and data frames within a single stack are automatically compiled into an interactive application where the user can change parameters and see the report update accordingly.

import dstack as ds

frame = ds.frame("churn_demo/app")

# for every region

frame.add(y_pred, params={"Region": reg, "Month": mon,
                          "Churn Prediction": ds.tab()})
# params is used to parametrize the interactive report, so the user may select the region

frame.push()


In our toy example, we use the following dstack functions:

  1. ds.frame defines the name of the application, in this case, churn_demo/app.
  2. frame.add adds visualizations or data frames to the frame along with their parameters. You can see how parameters like “Region” and “Month” are passed to the frontend to create interactivity in the report.
  3. frame.push pushes the frame to the server where the report is rendered.
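Conceptually, a frame is a list of attachments, each tagged with params, and the server derives the report's UI controls from the distinct values of each parameter. The sketch below is a simplified illustration of that idea, not dstack's actual code; the attachment names are made up.

```python
# Toy sketch: a frame collects attachments tagged with params; the
# server derives UI controls from the distinct values of each param.
attachments = []

def add(obj, params):
    attachments.append({"obj": obj, "params": params})

add("plot-emea-jan", {"Region": "EMEA", "Month": "Jan"})
add("plot-emea-feb", {"Region": "EMEA", "Month": "Feb"})
add("plot-apac-jan", {"Region": "APAC", "Month": "Jan"})

controls = {}
for a in attachments:
    for key, value in a["params"].items():
        controls.setdefault(key, set()).add(value)

# controls now maps each parameter to its selectable values,
# e.g. "Region" -> {"EMEA", "APAC"} and "Month" -> {"Jan", "Feb"}
```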

Notice how the function was added to the frame to define a “Churn Prediction” tab. This function enables you to build a multi-page report.

For the sake of this simple example, let’s assume that apart from “Churn Prediction”, our report should have another tab with a visualization, e.g. of the number of licenses purchased by a selected customer per recent month. To add it, we’ll create a plot, define a new tab using, add it as an attachment within the same frame, and push the frame to dstack.

# for every company

# make a figure using your favorite visualization library (such as plotly, matplotlib, bokeh, etc.)

frame.add(fig, params={"Company": company, "Licenses": ds.tab()})
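Tabs can be pictured as a grouping step: attachments carrying a tab entry are routed to their own page in the report. A simplified, stand-alone illustration of that grouping (the attachment names are made up; this is not dstack's code):

```python
# Toy sketch: attachments whose params include a tab entry are grouped
# into pages, one per distinct tab name.
attachments = [
    {"obj": "churn-plot", "tab": "Churn Prediction"},
    {"obj": "license-plot", "tab": "Licenses"},
]

tabs = {}
for a in attachments:
    tabs.setdefault(a["tab"], []).append(a["obj"])

print(list(tabs))  # ['Churn Prediction', 'Licenses']
```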


Once you’ve written the code of your job, you can configure how regularly you want the report to be updated.

[Screenshot: an unscheduled job]

With this, you can start the job by clicking the Run button in the top-right corner of the screen. Once the job has finished, you can see the report it pushed in your stacks.


If you open your report churn_demo/app from our example, it will look like the following:


The user can use the UI controls compiled from the parameters (passed to frame.add) to play with the report and switch between tabs.


dstack can help you use ML models and scheduled jobs to predict customer churn, customer lifetime value, dynamic pricing, industrial issues, and much more.

Here is the application built for our toy example to predict the churn rate of some imaginary customers using synthetic data.

Note that even though the links above refer to the hosted server, anyone can run dstack locally or on dedicated servers. The dstack framework is open source under Apache 2.0.

You can also check the documentation for tutorials and guides on how ML models, jobs, and reports work.

Details on installing the dstack library and running the server can be found in the documentation. For open-source users, the server opens in your default web browser, where you can push and manage your ML models and interactive web applications, and run scheduled jobs.

Please share your thoughts and questions in our Discord channel.

The cover image was created by Scriberia for The Turing Way community and is used under a CC-BY license.
