A trend we’ve been tracking for several years now is how the data science profession has shifted away from entirely independent, do-it-all unicorns toward more specialized roles. It’s not that individuals with deep knowledge across several domains have disappeared; rather, the need for data science has grown and teams have increased in headcount. In larger groups, and in an overall more active job market, there’s more room for specialization.
It’s not just that there are more cooks in the kitchen; machine learning solutions have also become much more ambitious in scope.
Model deployment is simply the engineering task of exposing an ML model to real use. The term is often used synonymously with making a model available via real-time APIs. Still, we should think about model deployment more broadly, as online inference in the cloud isn’t always a necessary or even a desirable solution.
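To make that distinction concrete, here is a minimal sketch of the two common modes, online (real-time) inference behind a request handler versus batch inference over a dataset. The `predict` function is a hypothetical stand-in, not a Valohai API; a real deployment would load a trained model artifact instead.

```python
# Stand-in for a trained model's inference step (hypothetical; a real
# system would load an artifact, e.g. a serialized scikit-learn model).
def predict(features):
    return 1 if sum(features) > 1.0 else 0

# Online (real-time) inference: one request in, one prediction out.
# This is the mode people usually mean by "deploying behind an API".
def handle_request(payload):
    return {"prediction": predict(payload["features"])}

# Batch inference: score an entire dataset on a schedule. No API or
# always-on server is needed, which is often simpler and cheaper.
def run_batch(rows):
    return [predict(features) for features in rows]

print(handle_request({"features": [0.7, 0.6]}))  # {'prediction': 1}
print(run_batch([[0.1, 0.2], [0.9, 0.9]]))       # [0, 1]
```

Which mode fits depends on the use case: if predictions are only consumed once a day, a batch job can replace an always-on endpoint entirely.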
This article will walk through the key considerations in model deployment and what it means in different contexts. Finally, we’ll show two example use cases on our MLOps platform, Valohai.
How can MLOps make consultant-client relationships more productive?
There’s no doubt that AI and machine learning are here to stay. Most companies that build software have realized this. The realization is not limited to companies developing technology-first software products; it extends to the vast majority of internal applications and background systems. Teams are figuring out which problems would be better solved with machine learning rather than fixed, pre-defined logic.
However, for many organizations, getting into machine learning means founding new teams and hiring suitable leadership, as data science isn’t an established function for many outside the Fortune 500…
Henrik Skogström / November 17, 2020
As you start incorporating machine learning models into your end-user applications, the question comes up: “When is the model good enough to deploy?”
There simply is no single right answer.
There is no clear-cut measure of when a machine learning model is ready to be put into production, but there is a set of thought experiments that you should go through for each new model.
When you are trying to decide if a machine learning model is ready for deployment, it is helpful to circle back to the algorithm’s original goal. Are you trying…
Henrik Skogström / October 26, 2020
MLOps is a set of best practices that revolve around making machine learning in production more seamless. The purpose is to bridge the gap between experimentation and production with key principles to make machine learning reproducible, collaborative, and continuous.
MLOps is not dependent on a single technology or platform. However, technologies play a significant role in practical implementations, similar to how adopting Scrum often culminates in setting up a tool such as JIRA and onboarding the whole team to it.
To make it easier to consider what tools your organization could use to adopt MLOps, we’ve made a…
Machine learning and artificial intelligence allow businesses to gain new insights and improve their business processes. However, they also expose companies to additional risks, because the algorithms are not explicitly programmed by humans.
There are regulatory, reputational, and ethical risks involved, which set a high bar for the minimum performance of machine learning in the real world.
Let’s look at some of these risks and how data scientists and compliance officers can help mitigate them.
Machine learning is a type of artificial intelligence that uses algorithms which learn from data in order to detect similar trends and patterns in the future. These insights help…
At Valohai, I lead the growth team. My mission is to ensure that no company tries to reinvent the wheel and wastes its resources building its own MLOps tooling.