If you want to continuously increase the business value of your software portfolio, you need to focus on measuring the end-to-end flow of work across your software delivery value streams. As I highlighted last week, there are three key roadblocks to clear in order to identify where work is waiting and where your bottlenecks are:
- Data fragmentation across multiple tools
- Different work states across multiple workflows
- Multiple frameworks and measurements
In this blog, I’ll look at toolchain data fragmentation and how to get to the ground truth of how work flows between the software delivery teams building and supporting your software products.
Software Delivery Isn’t an Assembly Line
When you compare traditional physical manufacturing (such as car production) with software product development, the distinct differences between the two processes—and how you measure them—quickly become clear. While the software delivery process may seem linear and repeatable, in reality it’s an iterative creative process of software design with high variability, unique output and unpredictable economics.
While DevOps applies lean manufacturing principles—through automating repeatable output to improve the deployment lead time from code commit to production—the work upstream is an entirely different ball game. Each output is unique, requiring the collaboration of a different set of practitioners to design, build and test iterations to continuously meet end-user needs. This work upstream (before a release) is long, unpredictable and unmeasured.
As our CEO and founder, Dr. Mik Kersten, writes in the EE Times, software delivery is “a complex network that produces intangible assets through conversation, coding and collaboration between numerous specialists. These lines of collaboration need to provide flow, feedback and traceability”. We need to bear this variability and complexity in mind if we’re to measure the end-to-end flow of business value accurately. Looking at how work moves between systems, from customer request to delivery and back, can help us gain a better understanding of what’s going on, as well as identify gaps in the flow.
Related reading: https://blog.tasktop.com/blog/a-unified-view-of-wait-states/
Multiple Tools for the Work
Let’s consider the assembly line of a car: the product starts its life as a set of raw materials such as metal, glass and other individual parts. These materials go through a network of workstations operated by different teams to be transformed into something that customers can drive on the road. Measuring the work at each station is critical to analyzing end-to-end flow, as each station holds information on the work carried out and how long it took to be completed. Studying and continuously monitoring this flow enables manufacturers to identify whether there are impediments to be removed and efficiencies to be gained.
In software delivery, however, information about the flow of value is recorded in a variety of tools used by the various teams: Word documents, Project/Product Portfolio Management (PPM) tools like CA PPM, Agile development tools like Jira, technical tools like GitHub, and ITSM tools like ServiceNow. And unlike an assembly line, there are myriad tools available to do the same type of IT work. Just looking at the XebiaLabs Periodic Table of DevOps Tools gives you a sense of the numbers, and that only shows the tools in the downstream Release stage of the software delivery value stream, i.e. the CI/CD pipeline from code commit to release.
Furthermore, different value streams within an enterprise might use different tools for the same purpose, such as Jira Align, TargetProcess or CA PPM for product and portfolio planning, depending on what works best for their particular needs and business unit. If work is outsourced, vendor teams are likely to use their own stack of tools to manage it, creating gaps in the end-to-end view of flow. Excel, PowerPoint and email are still very popular for managing work, especially upstream in the Ideate stage of the software delivery value stream. The problem is that it’s extremely hard to gather any quantitative data from these tools.
Complicating matters even further, if an organization has gone through mergers and acquisitions (M&A), the problem can be exacerbated by the multiple tools inherited during the process. For example, when I worked at Responsys, which was acquired by Oracle, all Responsys portfolio and engagement management work was recorded in Salesforce and Workfront, while other Oracle teams used a homegrown tool. This made cross-team collaboration a huge challenge until the tools were consolidated two years after the acquisition was completed. In our work with large enterprises, we at Tasktop routinely see instances where one business unit uses Jira for all its development work while another uses a tool like Azure DevOps.
Given that DevOps practices and metrics focus on automating and streamlining the Create and Release stages, it’s more likely that the bottlenecks to flow lie further upstream with the teams in the Ideate stage, who work more closely with the business. It’s the work in this area of the value stream that needs to be made visible so that those bottlenecks can be addressed. All this tool and data fragmentation makes measuring flow across the toolchain very tricky, but not impossible.
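To make the “tricky, but not impossible” part concrete, here is a minimal sketch in Python of the first step most teams take: pulling work items out of a couple of tools and normalizing them into one common shape so they can be analyzed together. The export formats and field names below are purely illustrative assumptions, not the real Jira or ServiceNow schemas.

```python
# A minimal sketch of normalizing work items from two hypothetical tool
# exports into one common model. Field names are illustrative, not the
# actual Jira or ServiceNow schemas.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class WorkItem:
    item_id: str
    source_tool: str               # e.g. "jira", "servicenow"
    item_type: str                 # e.g. "feature", "defect", "risk", "debt"
    created: datetime
    started: Optional[datetime]    # when work actually began
    completed: Optional[datetime]  # when work was delivered


def _parse(timestamp: Optional[str]) -> Optional[datetime]:
    return datetime.fromisoformat(timestamp) if timestamp else None


def from_jira_export(record: dict) -> WorkItem:
    """Map a (hypothetical) Jira issue export into the common model."""
    return WorkItem(
        item_id=record["key"],
        source_tool="jira",
        item_type=record["issue_type"].lower(),
        created=_parse(record["created"]),
        started=_parse(record.get("start_date")),
        completed=_parse(record.get("resolution_date")),
    )


def from_servicenow_export(record: dict) -> WorkItem:
    """Map a (hypothetical) ServiceNow ticket export into the common model."""
    return WorkItem(
        item_id=record["number"],
        source_tool="servicenow",
        item_type="defect" if record.get("category") == "incident" else "feature",
        created=_parse(record["opened_at"]),
        started=_parse(record.get("work_start")),
        completed=_parse(record.get("closed_at")),
    )


# With adapters like these, items from every tool in the chain can be
# collected into a single list and analyzed together.
items = [
    from_jira_export({"key": "APP-42", "issue_type": "Story",
                      "created": "2021-03-01T09:00",
                      "start_date": "2021-03-03T10:00",
                      "resolution_date": "2021-03-10T16:00"}),
    from_servicenow_export({"number": "INC0012", "category": "incident",
                            "opened_at": "2021-03-05T08:30",
                            "closed_at": "2021-03-06T12:00"}),
]
```

Once every tool’s work items land in one model like this, questions such as “where is work waiting?” become queries rather than archaeology.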
One Stack to Rule Them All?
One strategy is to consolidate all your work into one stack of tools. Atlassian, Azure DevOps and, to an extent, ServiceNow provide strong suites of tools that span the software delivery process from end to end. They allow you to track work all the way from portfolio planning to delivery and operations. Being in a single stack means the tools integrate well with each other through internal integrations and plugins, and many offer a good reporting interface for generating standard and custom metrics to track progress. This strategy works for some smaller organizations, where there are fewer value streams and the approach can be implemented top-down.
At medium to large enterprises, however, forcing a single stack is almost futile due to dynamics like location, culture, type of work, legacy systems and M&A entities. The latter can undo any work done to create a single stack and add to the tool diversity. Most importantly, a single stack may starve teams of the best-of-breed tools they need to do their work efficiently for their value streams, slowing their progress as a result. The State of DevOps report points to highly productive teams being able to use their tools of choice for their disciplines.
Integration: A More Agile Approach
A certain amount of tool consolidation is necessary to manage a varied tool network, but not at the expense of starving teams of the best technology and tools to do their work efficiently. For most organizations, a combination of tool consolidation and integration provides the best of both worlds. By integrating the tools used across Ideate, Create, Release and Operate, you can trace work on initiatives across the value stream, providing valuable flow metrics such as Flow Velocity, Flow Efficiency, Flow Time and Flow Load, as well as tracking Flow Distribution to ensure that you’re prioritizing work that creates and protects value.
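As a rough illustration (not Tasktop’s implementation), here is how those measurements might be computed once work items from across the toolchain have been normalized into a common model like the WorkItem sketch above. The definitions are deliberately simplified versions of the Flow Framework metrics.

```python
# A simplified sketch of the flow measurements named above, computed over
# normalized work items such as the WorkItem records sketched earlier.
# These are approximations of the Flow Framework definitions, not a
# vendor implementation.
from collections import Counter
from datetime import datetime
from typing import Iterable


def flow_velocity(items: Iterable, start: datetime, end: datetime) -> int:
    """Number of items completed within the period."""
    return sum(1 for i in items if i.completed and start <= i.completed < end)


def flow_time_days(item) -> float:
    """Elapsed calendar days from start of work to delivery."""
    return (item.completed - item.started).total_seconds() / 86400


def flow_efficiency(active_days: float, total_days: float) -> float:
    """Share of flow time spent actively working rather than waiting.
    Active time typically has to be reconstructed from each tool's
    state-change history."""
    return active_days / total_days if total_days else 0.0


def flow_load(items: Iterable, as_of: datetime) -> int:
    """Items started but not yet completed at a point in time (work in progress)."""
    return sum(1 for i in items
               if i.started and i.started <= as_of
               and (i.completed is None or i.completed > as_of))


def flow_distribution(items: Iterable) -> dict:
    """Proportion of completed items by type (feature, defect, risk, debt)."""
    done = [i.item_type for i in items if i.completed]
    return {t: n / len(done) for t, n in Counter(done).items()} if done else {}
```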
While not all tools can be integrated with the same integration solution, and some tools are not meant for integration at all (e.g. Excel, PowerPoint), integration can provide a more robust and traceable view of a fragmented tool network. It can also bring vendors into the fold in a way that does not impede vendor teams’ own internal processes while still shedding light on the state of work flowing into the value stream.
Tool integration enables teams to work in their tools of choice while third-party integrators connect those tools: teams get the freedom to focus on their work and innovate, and the integration infrastructure bridges the gaps between tools so that the end-to-end flow can be visualized.
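To show what “bridging the gaps between tools” can look like at its simplest, here is a deliberately small one-way mapping sketch in Python. Every field, state name and tool involved is a hypothetical stand-in; real integration infrastructure also handles two-way synchronization, conflict resolution and change events.

```python
# A deliberately small, one-way mapping between a hypothetical vendor tool
# and a hypothetical internal tool. All field and state names are stand-ins.

# How states in the vendor's tool translate to states in the internal tool.
STATE_MAP = {
    "new": "To Do",
    "in development": "In Progress",
    "ready for review": "In Review",
    "done": "Done",
}


def map_vendor_item(vendor_item: dict) -> dict:
    """Translate a vendor work item into the internal tool's schema so its
    state stays visible (and measurable) inside the value stream."""
    return {
        "title": vendor_item["summary"],
        "description": vendor_item.get("details", ""),
        "status": STATE_MAP.get(vendor_item["state"].lower(), "To Do"),
        "external_id": vendor_item["id"],  # preserves traceability to the source tool
    }


if __name__ == "__main__":
    incoming = {"id": "VND-101", "summary": "Add SSO support",
                "state": "In Development"}
    print(map_vendor_item(incoming))
```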
Architecting the toolchain as a product (perhaps an organization’s most important product) is key to achieving visibility across the value streams. Enterprise architecture teams can help in this regard to ensure any new tool can either be integrated or embedded in the existing tool landscape to keep the infrastructure measurable end-to-end and conducive to Product Modelling — more on this later in the series.
Can’t wait? Let’s have a chat about how you can take small steps to begin measuring what matters in software delivery today.
Missed last week’s post on the three roadblocks? Read it here: https://blog.tasktop.com/blog/3-roadblocks-to-measuring-the-flow/