You code, you test, you ship, and you maintain
This workshop addresses one of the most common pain points we have encountered with data scientists at many organizations: the last-mile delivery of data science applications, that is, moving data science solutions into production.
Plenty of material is available on how to do machine learning (including from the authors of this workshop), but hardly any covers how to put models into production and how to keep them updated.
Attendees will learn how to build a seamless end-to-end data-driven application - data ingestion, exploration, machine learning, a RESTful API, a dashboard, and making the process repeatable - to solve a business prediction problem and present it to their clients.
“Jack of all trades, master of none, though oft times better than master of one”
One of the common pain points we have come across in organizations is the last-mile delivery of data science applications. There are two common delivery vehicles for data products – dashboards and APIs.
More often than not, machine learning practitioners find it hard to deploy their work to production, and full-stack developers find it hard to incorporate machine learning models into their pipelines.
Successfully building a data-science-driven product or application requires a basic understanding of machine learning, server-side programming, and front-end development.
In this workshop, attendees will learn how to build a seamless end-to-end data-driven application – from data ingestion and data exploration, to creating a simple machine learning model, exposing its output as a RESTful API, and deploying a dashboard as a web application – to solve a business problem. Attendees will then learn how to make this process repeatable and automated: how to set up data pipelines, and how to handle updates to the data by updating the model and/or the dashboard.