To some people, developers are ingenious, innovative software generators. To others, they’re mere code hackers. Either way, the world they know is changing, and their role must evolve to take on more responsibility and greater accountability for the code and applications they create.
One of the more pressing challenges facing software development and delivery teams occurs when software is released and running in production. Deployment, release management and maintenance issues (especially resolving problems once applications are live in the field) are the bane of software production teams (the developers) and operations teams alike. The problems are getting harder, not easier, with each technological and platform advance.
Knowing this hardship, you’d expect the relationship between development and operations teams, the DevOps bond, to be attuned to their respective requirements, shared challenges and goals, and in general to be a lot more collaborative. Nothing could be further from the truth. The disconnect that exists between many development and operations teams is both legendary and ingrained.
The “throw it over the wall” attitude, a key culprit in the strained DevOps relationship, partly stems from the lack of deep, connected insight into deployed assets, process transactions and system configurations, as well as the patches and management policies that exist in many production environments.
But, when all is said and done, the real culprit at the center of the breakdown in the DevOps relationship is a shameful disregard, on both sides, for the communication and connections needed to understand the dynamics of an application deployed in the field, and the impact of changes made either to the application or to its environment. That lack of knowledge and insight, along with the failure to manage expectations on both sides, wastes time, money and other precious resources whenever problems arise. These are fundamental failings that underlie most of the woes of software development, delivery and ongoing maintenance once an application or code component is deployed in the field.
Forget what the pundits say: Despite the proliferation of Web services and modern middleware, the walls of the silos are unlikely to tumble down anytime soon. It has, after all, taken a long time to build them up. But they are becoming more porous, and that is not easy for developers or operations teams to handle.
What happens when silos intersect? Developer teams may find themselves interacting with two or more operations teams. Managing and controlling the handover from developers to operations, understanding what to expect, and ensuring that expectations on both sides are properly met are the keys to any successful collaboration.
Virtualization can help or hinder
The pressure is on the development community, as well as the business heads who pay the bills when things go wrong, and the chances of things going wrong have increased. Software is more advanced, complex and prolific. Control, management and execution are being divested through outsourced services and virtualized infrastructure, architectures and tools. Let’s also not forget the in-house/on-premises issues of ownership and control.
Add to this list on-demand licensing models and self-service-style acquisition and implementation strategies, and you can see an altogether more complex reality, ripe with tripping points. The stakes are higher still, with the economic downturn bringing greater scrutiny of budgets.
The disconnect between the two sides was less pronounced in the early days of mainframe development. Now the pressure is on to regain and strengthen this bond, driven by a number of converging factors:
• The need to quickly resolve problems experienced in the field. User expectations keep rising, resulting in a growing intolerance for a poor experience with software applications, especially one caused by poor deployment and implementation.
• The trend toward data center automation and transformation based on virtualization and cloud strategies and technologies, which will particularly affect the frameworks, platforms and tools chosen to build and deploy the applications that run in such environments.
• The rise of cloud computing or, more to the point, the various “Thing as a Service” models (such as infrastructure, application, software and platform), which will change the developer/operations relationship dramatically, because managing and maintaining them requires a shift of emphasis and structure in operations teams. It will do so in ways that have yet to be fully understood, as cloud computing is still evolving.
Virtualization and cloud computing are blurring the boundaries of DevOps responsibilities. On one hand, developers can directly provision or deploy to a virtualized environment, bypassing the operations team. As a result, developers need new knowledge and access to better instrumentation to gain the level of insight that lets them directly resolve problems in application code running in production.
On the other hand, virtualization redefines the skills of IT operations. As expertise dissipates across a broader set of roles, traditional IT operations teams may lose their centralized command of the entire server stack and the underlying network connections. Operations people have become specialists, which has created gaps of vulnerability at the boundaries between the various operations roles. They require automation and more integrated tooling to manage these gaps and to develop a more holistic approach to operations, one that reaches out to developers-turned-operators.
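What might that better instrumentation look like? As a minimal sketch, assuming a hypothetical Python application (the APP_VERSION and DEPLOY_ENV variables are illustrative, not prescribed by any product), every log event can be stamped with the build version and target environment, so a developer investigating a production fault knows immediately which build, in which environment, produced it:

    # Hedged sketch: version-stamped structured logging using only the
    # Python standard library. Variable names are illustrative assumptions.
    import json
    import logging
    import os

    APP_VERSION = os.environ.get("APP_VERSION", "unknown")  # set at build time
    DEPLOY_ENV = os.environ.get("DEPLOY_ENV", "unknown")    # e.g. a virtualized cluster

    class JsonFormatter(logging.Formatter):
        def format(self, record):
            # One JSON object per event, easy for operations tooling to index.
            return json.dumps({
                "level": record.levelname,
                "message": record.getMessage(),
                "app_version": APP_VERSION,
                "deploy_env": DEPLOY_ENV,
            })

    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    log = logging.getLogger("app")
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    log.info("service started")  # every event now carries build and environment context

The technique is deliberately mundane; the value lies in development and operations reading the same version-stamped record of what is actually running.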
Automation might bridge the DevOps gap
Automation can bridge the DevOps gap because neither developers nor operators can manually account for every piece of software deployed, let alone know how configuration changes and infrastructure patches will affect the software’s design and behavior.
But automation is hard to achieve. The connections that need to be in place, along with the ability to trace and version-stamp relationships, dependencies and configurations, explain why effective automation is so complex, difficult and expensive. So despite the drag they present, manual processes and configurations are here to stay. The trouble is that the resources required to ensure that manual processes repeatedly deliver the required outcome are expensive in the long run, and the processes themselves are even harder to manage and monitor.
This explains why, in the long run, automation will happen: to ensure compliance with rules and regulations, raise productivity, enable a high degree of transparency, and increase the speed of delivery. Automation becomes even more vital for tracking the deployment and configuration of assets in virtualized environments and SOA-based infrastructures.
Automation can also assure a level of repeatability that, by underpinning best practice, makes it more likely that better software is delivered consistently.
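To make the trace-and-version-stamp idea concrete, here is a minimal sketch in Python, under stated assumptions: the configuration is a flat dictionary, the baseline file name is hypothetical, and a real system would track far richer relationships. The pattern is what matters: record an approved configuration with a fingerprint at release time, then detect drift mechanically rather than by inspection.

    # Hedged sketch: version-stamp a deployed configuration and detect drift.
    # File name and fields are illustrative assumptions, not a specific product.
    import hashlib
    import json
    from pathlib import Path

    def fingerprint(config: dict) -> str:
        # A stable hash of the configuration, so any change is detectable.
        canonical = json.dumps(config, sort_keys=True).encode()
        return hashlib.sha256(canonical).hexdigest()

    def record_baseline(config: dict, path: Path) -> None:
        # Store the approved configuration and its stamp at release time.
        path.write_text(json.dumps({"config": config, "stamp": fingerprint(config)}))

    def has_drifted(current: dict, path: Path) -> bool:
        # Compare what is actually running against the recorded baseline.
        baseline = json.loads(path.read_text())
        return fingerprint(current) != baseline["stamp"]

    baseline_file = Path("webapp.baseline.json")
    record_baseline({"threads": 8, "db_host": "db01"}, baseline_file)
    print(has_drifted({"threads": 8, "db_host": "db02"}, baseline_file))  # True: drift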
What about tools?
Surprising as this may seem, the goals and product strategies of the software vendor community are, for once, collectively aligned and, for the most part, in step with the needs and challenges of end-user organizations grappling with software development and operations. This alignment is brought on not just by altruism but by converging necessity, because the barrier to software-driven progression and innovation is, ironically, a lack of software-based interaction and automation.
Repositories, such as SCM systems, store and manage the various assets, relationships and dependencies that ensure and maintain the fidelity of deployed applications and infrastructure configurations. They must provide higher-level representations of assets in relation to the systems, business processes and business services they serve. Doing so offers more transparent insight into, and understanding of, the impact of any change. So it is not just a matter of physical representation, but also one of logical representation and of the dependencies among resources and systems.
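As a hedged illustration of that logical-plus-physical view, the sketch below models a repository as a small dependency graph (the asset names are invented for the example). Recording who depends on whom makes “what is the impact of this change?” a mechanical query rather than guesswork:

    # Hedged sketch: physical assets and logical services in one dependency
    # graph, queryable for change impact. Asset names are illustrative.
    from collections import defaultdict

    class AssetRepository:
        def __init__(self):
            self.required_by = defaultdict(set)  # dependency -> its dependents

        def add_dependency(self, dependent: str, dependency: str) -> None:
            self.required_by[dependency].add(dependent)

        def impact_of_change(self, asset: str) -> set:
            # Everything that could break, directly or transitively, if asset changes.
            impacted, stack = set(), [asset]
            while stack:
                for dependent in self.required_by[stack.pop()]:
                    if dependent not in impacted:
                        impacted.add(dependent)
                        stack.append(dependent)
            return impacted

    repo = AssetRepository()
    repo.add_dependency("billing-service", "app-server-patch-42")  # logical on physical
    repo.add_dependency("monthly-invoicing", "billing-service")    # process on service
    print(repo.impact_of_change("app-server-patch-42"))
    # Patching the server touches both the service and the business process.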
Aside from a tool strategy (something that pretty much all the life-cycle management providers are looking at), one needs to consider the patterns and behaviors found in organizations that exhibit good working partnerships between operations and development.
DevOps requires end-user guidance
What will be important for any CIO, or for any IT, development or operations manager or team going forward, is understanding the dynamics and characteristics of their current handover points and policies. They will then need to put in place tools, systems and processes that offer confidence that those handovers are not only well designed and developed, but also agreed to by all relevant parties, and then rigorously enforced and controlled to ensure repeatability. All of this must be supported by a management framework that keeps them easily configurable and adaptable, and that provides the right level of insight and feedback to resolve any problems. Simple? If only!
The line of communication between Dev and Ops needs to be clarified. Neither side is well-versed in communicating what it needs in order to carry out its respective responsibilities. QA and testing, a group within the software delivery team, could and should help smooth relations between the two sides.
Today, testing is seen as an extension of development rather than a core part of the deployment team. But QA and testing teams need a broader scope and a more active role in shaping and strengthening the DevOps relationship. Many of the management and monitoring tools are raising the profile and capability of the testing function as a key conduit between Dev and Ops.
More importantly, the collaboration between those parties and the rest of the software delivery team needs to follow agile practice, with representation from all the relevant stakeholders at the start of any software delivery effort. This means bringing together end users, developers, operations/system managers and QA/test professionals.
Power to the automated future
The new order behind the DevOps relationship is one of convergence and integration of concerns and responsibilities, and it is being repeated across the whole IT spectrum. It is directing and driving new bonds while reshaping existing relationships. At its heart are governance and wider collaboration among participating stakeholders, as well as the ability to automate and drive policy across all life cycles to ensure consistent and reliable delivery, and to manage change more effectively. It is this that brings together and aligns the strategies for application life-cycle management, ITIL/ITSM, product-line management and agile.
Harmony between software development and operations? Now that would be nice, wouldn’t it!
Bola Rotibi is research director of U.K.-based Creative Intellect Consulting, which covers software development, delivery and life-cycle management.