DevOps is doomed as a term … unless it can be clearly defined.
What is DevOps?
As has been noted many times, DevOps is a term that is hard to define.
DevOps is a term that is so easily misapplied in the enterprise that its popularity threatens [its own] adoption. ... If you can’t define it for the bean counters – it might as well not exist at all.
Some would say DevOps is experiencing an identity crisis that sets it up for failure.
DevOps could be forgotten as quickly as it arose.
Thus defining DevOps is crucial to its success.
DevOps is a goal
Defining DevOps by what people are doing is nearly impossible because no two DevOps teams are doing the same thing. On the other hand, all DevOps teams share the same goal. That goal is clearly articulated as:
DevOps enables fast flow of features from development to IT operations to the customers.
- Gene Kim
The roots of DevOps arose directly out of continuous delivery, with the 2009 Flickr presentation on 10 deploys per day. As Gartner puts it:
DevOps — Not a Market, but a Tool-Centric Philosophy That Supports a Continuous Delivery Value Chain
Gartner nails it: DevOps is doing what it takes to provide a continuous delivery value chain. Unfortunately, many people want to keep DevOps so open ended that they risk dooming the movement. The DevOps movement can succeed once there is a stake in the ground with a clear definition to rally around. DevOps is defined as the tools and culture that provide a continuous delivery value chain. From this short definition, all the required tools, groups, cultural changes, processes and methods such as feedback loops can be expanded upon.
How is the DevOps goal of continuous delivery achieved?
Now that DevOps is defined, the focal point becomes laying out the tools and processes that achieve continuous delivery. What tools and processes are required? Which tools are the best? What are the different process flows where some tools apply and others do not? There is a frustrating lack of discussion of the best tools and processes for achieving continuous delivery.
To increase flow in any system, including a continuous delivery pipeline, the only productive action is to improve the constraint in the system. The constraint is the slowest point in the flow. Constraints differ from company to company depending on where each is in its DevOps maturity, but finding the top constraint is the first step. As the works of Eli Goldratt demonstrate, any improvement not made at the constraint is an illusion. Thus the first step in DevOps is to identify the top constraint and optimize it.
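Goldratt's point can be sketched in a few lines of Python (the stage names and rates below are hypothetical): a pipeline's end-to-end throughput equals the rate of its slowest stage, so improving any other stage changes nothing.

```python
# A minimal theory-of-constraints sketch. Each stage's rate is the number
# of features it can pass per week; end-to-end flow is capped by the
# slowest stage. Stage names and rates are made up for illustration.

def throughput(stages):
    """End-to-end flow equals the rate of the slowest stage."""
    return min(stages.values())

def constraint(stages):
    """The stage with the lowest rate is the current constraint."""
    return min(stages, key=stages.get)

pipeline = {
    "develop": 20,
    "provision test env": 3,   # the bottleneck
    "run QA": 10,
    "deploy": 15,
}

print(constraint(pipeline))    # provision test env
print(throughput(pipeline))    # 3

# Doubling a non-constraint stage is the "illusion" of improvement:
pipeline["develop"] = 40
print(throughput(pipeline))    # still 3

# Only improving the constraint raises system throughput:
pipeline["provision test env"] = 9
print(throughput(pipeline))    # 9
```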
Gene Kim, who has worked with hundreds of CIOs and surveyed 14,000 companies, lists the top two constraints in the fast flow of features as:
- Provisioning environments for development
- Setting up test and QA environments
The only way to optimize the fast flow is to find these constraints where they exist and remove them. The two most commonly seen constraints both center on environment provisioning. As Gene Kim puts it:
One of the most powerful things that organizations can do is to enable development and testing to get the environments they need when they need them.
- Gene Kim
Thus the question for most companies is how to improve the provisioning of environments for development, QA and testing.
Why is environment provisioning such a bottleneck in DevOps?
Environment provisioning is a bottleneck because development, QA and testing environments require production parity. Applications should be developed and tested on environments as close to production as possible in order to catch bugs before release. Achieving dev/test/prod parity is difficult because provisioning consistent, production-like environments is a hard problem. These environments are complex to create because they require many packages and much configuration information, and they are slow to create because of operations such as allocating and masking a copy of the production database for each one. As a result, environments are needed faster than they can be supplied, yet developers depend on them to write code, QA depends on them to test it, and IT depends on them to verify deployment processes. When environment provisioning can be done quickly, reproducibly and efficiently, the main obstacle to continuous delivery is eliminated. Eliminating the environment provisioning constraint relies on compute virtualization, data virtualization, configuration management and automation.
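The database-copy problem above is exactly what data virtualization attacks. The toy Python model below (all names hypothetical) shows the principle behind thin cloning as used by OpenZFS clones and Delphix: a clone shares unchanged blocks with the source copy-on-write style, so provisioning cost tracks what changes, not the size of the database.

```python
# Toy model of thin cloning vs. a full copy for a test database.
# A thin clone shares the production blocks read-only and diverts
# writes to a private change set (copy-on-write), so it is created
# instantly and consumes storage only for changed blocks.

class ThinClone:
    def __init__(self, source):
        self.source = source      # shared, read-only view of production blocks
        self.changes = {}         # only blocks this environment has written

    def read(self, block):
        return self.changes.get(block, self.source[block])

    def write(self, block, data):
        self.changes[block] = data  # copy-on-write: writes stay local

prod = {i: f"block-{i}" for i in range(100_000)}  # a 100,000-block "database"

full_copy = dict(prod)            # full copy: duplicates every block up front
clone = ThinClone(prod)           # thin clone: instant, near-zero storage
clone.write(42, "masked-data")    # e.g. masking sensitive data in the clone

print(len(full_copy))             # 100000 blocks duplicated
print(len(clone.changes))         # 1 block of new storage
print(clone.read(42))             # masked-data
print(clone.read(7))              # block-7 (still shared with production)
```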
Continuous Integration and Delivery (CI/CD) Landscape
Implementing CI/CD thus requires efficient environment provisioning, which in turn depends on tools and technology such as:
- Configuration management: Chef, Puppet, Ansible
- Machine virtualization and containers: VMware, KVM, Docker
- Data virtualization: OpenZFS, Flocker, Delphix
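To make the configuration-management piece concrete, a minimal Ansible playbook could pin a test environment to the same package versions and configuration as production. This is an illustrative sketch only; the host group, package version and file paths are assumptions, not taken from any real environment:

```yaml
# Illustrative playbook: host group, versions and paths are hypothetical.
- hosts: test_environments
  become: yes
  tasks:
    - name: Install the same web server version that production runs
      yum:
        name: httpd-2.4.6
        state: present

    - name: Push the production-parity application config
      template:
        src: app.conf.j2
        dest: /etc/app/app.conf
```

Because the playbook is declarative and idempotent, it can be re-run against every dev, QA and test environment to keep them converged on production parity.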
Once environment provisioning is streamlined, the next step is automating the flow of developer commits through QA and deployment testing with continuous integration tools like Jenkins and TeamCity.
Examples are emerging that document how to wire these tools together, such as a recent one on Jenkins and Ansible.
Efficient environment provisioning and test automation are just the core. From there we can build out more of the processes to attain the goals of DevOps.
The ambiguous descriptions of DevOps are undermining the movement, but DevOps can be defined as the tools and culture that support a continuous delivery value chain. With a stake in the ground to rally around, the goal is no longer debating what is or is not DevOps; instead it becomes mapping out the best tools, processes, methods and cultural changes that support this definition, and matching them to the various situations encountered in the industry. Everyone is doing DevOps in the sense of software development, integration, QA and release; the question is how well you are doing it and whether you can improve.
This article is published as part of the IDG Contributor Network.