Adapting the software development life cycle for machine learning means working out how the SDLC fits your ML workflows and teams. With data scientists currently spending large chunks of their time on infrastructure and process instead of building models, finding ways to make the SDLC work effectively with machine learning is critical not only for the productivity (and job satisfaction) of your data scientists, but for your entire development team.
But this comes with challenges. ML introduces patterns and technology issues that the traditional SDLC doesn't address. To manage this, we need to both adapt the SDLC and address the cultural differences between data scientists and other developers.
It’s important to remember that the field of ML is still developing and, therefore, non-uniform. Data science is more of an art than standard software development, and very much a research-driven activity. Standard software developers, conversely, tend to adapt their techniques to the job at hand and conform to their environment; for example, they will learn another language when they take a new job because that’s the language most of the in-house architecture uses. ML tasks, on the other hand, are often tied to a specific language or set of frameworks, so data scientists use whatever is best for the job, making for a much more heterogeneous environment.
Consider a model where the ML developers use Python NLTK at a particular version level, yet for other tasks they use R with GPU-accelerated TensorFlow, and so on. That model then has to go into production even though most standard serving software doesn’t run R at all; it’s a language the DevOps team may never have encountered, so they need a way to adapt their serving workflow to accommodate these more heterogeneous environments.
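As a rough illustration of one way to bridge that gap, here is a minimal Python sketch that keeps the R model in R and exposes it to a Python-based serving stack through a subprocess call. The script name (score.R) and the JSON-over-stdin contract are assumptions made up for this example, not features of any particular serving product.

```python
import json
import subprocess


def score_with_r(features: dict) -> dict:
    """Call a hypothetical R scoring script and return its JSON output.

    Assumes Rscript is on the PATH and that score.R reads a JSON payload
    from stdin and writes a JSON prediction to stdout; that contract is
    an illustration, not a requirement of any particular tool.
    """
    result = subprocess.run(
        ["Rscript", "score.R"],      # the model itself stays in R
        input=json.dumps(features),  # hand the features over as JSON
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(result.stdout)


if __name__ == "__main__":
    print(score_with_r({"text": "example input"}))
```

The mechanism matters less than the boundary: the serving side only needs to know the input and output contract, not the language or framework versions behind it.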
Another area where data scientists and DevOps teams don’t align is monitoring and optimization. In the ML world, testing tends to happen only while the model is being developed, not once it’s in production on a server somewhere. A standard developer, however, is thinking not only about whether the initial component was built right, but also about whether, once the world is using it, it can be continuously verified and still give the expected results.
A second issue here is that DevOps teams often don’t know how to monitor a model. They’re not used to considering model drift or probabilistic results, so they might test an ML model, find the results slightly different each time, and conclude that the model is failing. A data scientist would know, however, that there needs to be a 10 percent allowance in the results.
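To make that concrete, here is a small Python sketch of the kind of check a monitoring pipeline could run: instead of asserting that results match exactly, it only flags them when they drift outside an agreed band. The metric values and the 10 percent default are illustrative; the right tolerance for a real model is for the data science team to specify.

```python
def within_tolerance(baseline: float, observed: float, tolerance: float = 0.10) -> bool:
    """Return True if the observed metric sits within a relative tolerance of the baseline.

    The 10 percent default mirrors the allowance mentioned above; the right
    band for a real model is for the data science team to decide.
    """
    if baseline == 0:
        return abs(observed) <= tolerance
    return abs(observed - baseline) / abs(baseline) <= tolerance


# Example: production accuracy of 0.87 against an offline baseline of 0.92
# is within the band, so it's a drift signal to watch, not a failed deployment.
print(within_tolerance(baseline=0.92, observed=0.87))  # True
```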
Predictability is also a challenge. The SDLC runs on predictable, scheduled releases, whereas data science cycles are erratic and unpredictable: where software developers tend to plan their work ahead of time in two-week cycles, researchers work on much looser timelines, where something might take a day or two months.
Cloud environments are another area for consideration. For developers, who are primarily writing code, there is a lot of adjacent work: you need to be able to set up a server, and set up and connect to a database, and these tasks are usually managed in cloud infrastructure. But data scientists aren’t used to that sort of workflow; they tend to have everything self-contained on their laptops or perhaps via a managed service. They’re also used to training and testing in self-managed environments, and have very likely not worked with DevOps before. It’s a considerable learning curve for them, and often a confusing one that involves unfamiliar jargon they have to decipher in order to communicate with IT staff about their work.
On the flip side, DevOps teams are simply not used to considering ML-specific needs or allowing for nonstandard deployments. Plus, they expect people who are writing code, data scientists included, to know how to configure a server or set up authentication properly. Those expectations go unmet; the IT side of the house sends over something that seems obvious to them, but the ML side of the house may be confused by it.
These are important challenges, but there are ways to manage them. Using tools to create isolation layers can make the whole process easier. Rather than trying to take your ML model and drop it into whatever IT infrastructure already exists, even when it doesn’t fit, consider a tool that helps you create an interface requiring little adaptation on either side. Developers, rather than incorporating unfamiliar code into their code base, can direct their calls at that interface and let it pass requests through to the model. For the ML team, it can containerize what they’re doing without requiring them to learn a heavy set of new tools.
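As a hedged sketch of what such an interface can look like, the Python example below uses Flask (chosen here purely for familiarity) to put a single /predict route in front of a stubbed model. The route name, payload shape, and port are assumptions for illustration; the point is that application developers only ever call the endpoint, while whatever sits behind it can be containerized and swapped by the ML team.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


def predict(features: dict) -> dict:
    """Stand-in for whatever model the ML team puts behind this interface."""
    return {"label": "positive", "confidence": 0.5}  # illustrative stub


@app.route("/predict", methods=["POST"])
def predict_endpoint():
    # Application code only ever talks to this route; the model, its language,
    # and its framework versions all stay behind the boundary.
    features = request.get_json(force=True)
    return jsonify(predict(features))


if __name__ == "__main__":
    app.run(port=8080)
```

In practice, that same boundary is often shipped as a container image: the ML team bakes the model and its dependencies into the image, and the DevOps team deploys it like any other service.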
To manage the cultural differences, have the two teams take some time to understand each other’s work and become more adaptive in how they operate. Expect the data science team to have its own workflow, and accommodate that, but create a defined interface between the two teams and allow each to use the tools and methodologies that work best internally, to maximize their individual productivity.
Ultimately, don’t be constrained by what you perceive the SDLC to be; adapt it to fit. Give teams the independence and flexibility to be as productive as possible, and adopt the tools and techniques that enable this.