“If you measure anything, measure wait time.” - Dominica DeGrandis, Making Work Visible
When it comes to improving the end-to-end flow of business value across your software delivery organization, one of the first places to start is identifying your wait states. By pinpointing where waste and wait time are holding up value delivery, you can determine your Flow Efficiency, which measures the time actively spent on a flow item (features, defects, debt, risk) as a percentage of its total Flow Time. A low Flow Efficiency is an indication of waste: items are stagnating in a wait state for one reason or another. The problem is that most software delivery organizations lack a consistent definition of wait time and wait states. Flow state modeling provides a consolidated business view of active and wait time, helping you measure flow across the multiple team workflows in a value stream.
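As a rough illustration (a sketch, not a description of any particular tool’s implementation), the calculation itself is simple once you can separate active time from wait time; the durations below are hypothetical.

```python
from datetime import timedelta

def flow_efficiency(active_time: timedelta, flow_time: timedelta) -> float:
    """Return active time as a percentage of total Flow Time."""
    if flow_time.total_seconds() == 0:
        return 0.0
    return 100.0 * active_time.total_seconds() / flow_time.total_seconds()

# Hypothetical feature: 12 days from request to delivery, only 3 of them active.
flow_time = timedelta(days=12)
active_time = timedelta(days=3)
wait_time = flow_time - active_time  # the remaining 9 days were spent waiting

print(f"Flow Efficiency: {flow_efficiency(active_time, flow_time):.0f}%")  # 25%
```

The hard part, as the rest of this post explores, is not the arithmetic but agreeing on which states count as active and which count as waiting.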
Wait State Variability
As I outlined in last week’s blog Overcoming Data Fragmentation across Multiple Tools to Measure End-to-End, a typical software delivery organization comprises a complex network of multidisciplinary teams who use different specialist tools to plan, build, deliver and support software products. These various tools, such as Jira, Planview, TargetProcess and ServiceNow, provide a number of different ways to determine the state of work. Tools like Jira and RTC control development work through tightly defined state transition workflows. At the other end of the spectrum, tools like Trello and LeanKit track the current state and any blocks more loosely, through lanes and flags. Most application lifecycle management (ALM) tools fall somewhere in between.
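To make that variability concrete, here is a hypothetical sketch using simplified, made-up record shapes (not the real Jira, Trello or LeanKit APIs): one item carries an explicit workflow status, the other a lane plus a blocked flag, and both have to be collapsed into a single state signal before they can be compared.

```python
# Hypothetical, simplified records -- not the real Jira/Trello/LeanKit payloads.
jira_style_item = {"key": "PROJ-101", "status": "In UAT"}                # explicit workflow status
board_style_card = {"id": "abc123", "lane": "Testing", "blocked": True}  # lane plus a blocked flag

def normalized_state(item: dict) -> str:
    """Collapse tool-specific signals into a single state label."""
    if "status" in item:                  # workflow-driven tools
        return item["status"]
    state = item.get("lane", "Unknown")   # board-driven tools
    return f"{state} (blocked)" if item.get("blocked") else state

for work_item in (jira_style_item, board_style_card):
    print(normalized_state(work_item))    # "In UAT", then "Testing (blocked)"
```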
All this variability means that how work is identified as waiting can differ drastically, even within the same organization. It is not uncommon for teams using the same tool (such as Jira) to interpret its statuses differently. For example, at one financial services customer we work with, Value Stream A used the status “In UAT” to flag work that was actively being tested (an active state), while Value Stream B used the same “In UAT” status to flag work that was waiting to be tested (a wait state).
In working with customers to implement the Flow Framework® and surface their initial Flow Metrics, one of our biggest realizations has been that enterprise workflows rarely build in enough wait states, which leads to ambiguous interpretations of statuses like “In UAT” in the example above. This in turn leads to inaccurate (and often over-optimistic) Flow Efficiency calculations.
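To see how much that ambiguity can distort the numbers, consider a hypothetical work item with a 10-day Flow Time, of which 4 days sit in “In UAT” and 3 days are actively worked in other statuses; the figures are invented purely to show the effect.

```python
# Hypothetical durations (in days) for one work item with a 10-day Flow Time.
flow_time = 10
in_uat = 4          # time spent in the ambiguous "In UAT" status
other_active = 3    # time actively worked in other statuses

# Value Stream A's interpretation: "In UAT" is active (testers are working on it).
efficiency_if_active = 100 * (other_active + in_uat) / flow_time    # 70.0

# Value Stream B's interpretation: "In UAT" means waiting to be tested.
efficiency_if_waiting = 100 * other_active / flow_time              # 30.0

print(efficiency_if_active, efficiency_if_waiting)
```

The same work item, over the same elapsed time, reports a Flow Efficiency of 70% or 30% depending purely on how one status is interpreted.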
Another common impediment is the lack of psychological safety to actually flag work as “waiting” on another team, group, or the business. The latter is particularly tough, as it may be construed as passing the buck. Consequently, extracting overall wait time from request to delivery, and separating it from overall Flow Time to identify bottlenecks and impediments to flow, becomes near impossible. Extrapolate that across all the project areas in each tool and the number of software delivery teams using each one, factor in the variability between those tools, and you can quickly see how tracking overall active and wait states becomes extremely complex.
One Workflow to Rule Them All?
The obvious impulse is to strive for a standard workflow across the whole organization by setting up common status rules, common estimation models and common definitions of “done”. In fact, we have seen a few small organizations succeed in adopting such an approach. However, even in small organizations, this is difficult to sustain. As team size grows (typically in multiples of Dunbar’s Number) and/or there is turnover in the team, maintaining uniformity becomes increasingly difficult.
At large enterprises, there is the danger that workflow standardization will starve teams of the freedom they need to do their work. I have certainly been on the receiving end of this a few times in my career, and it has sapped productivity and creativity on each occasion. To allow for exceptions, organizations tend to provide a process for deviating from the standard, but these approvals tend to be long-winded. That can foster a “let’s do it now and ask for forgiveness later” culture, which leads to a large number of unsupported workflows that are left for IT to manage after the fact. One organization we worked with called this phenomenon “Shadow IT”, referring to the IT tools and processes people developed outside of the approved process.
Flow State Modeling
Team autonomy extends to using tools that suit the purpose (development, testing, project management, etc.). A workflow should be team- and value-stream-specific, and should give the teams doing the work the freedom to adapt it to their context and needs. As counterintuitive as it sounds, this variability fosters efficiency: teams that can adapt their workflows to their work are better able to serve their customers and the business.
In order to measure flow and surface Flow Metrics to the business, a consolidated view of active and wait time is more important than the individual states themselves. Ideally, we want to give teams the freedom to develop their own workflows while abstracting away the details so they can be rolled up into business-level metrics. Dr. Mik Kersten’s Flow Framework provides a way around this conundrum by abstracting flow states into just four levels: New, Active, Waiting and Done.
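A minimal sketch of such a model, using hypothetical per-value-stream status names (an illustration of the idea, not the Flow Framework’s implementation), might look like this:

```python
# Hypothetical status-to-flow-state mappings, maintained per value stream.
FLOW_STATE_MODEL = {
    "Value Stream A": {
        "Backlog": "New",
        "In Development": "Active",
        "In UAT": "Active",        # testers are actively testing
        "Awaiting Deploy": "Waiting",
        "Released": "Done",
    },
    "Value Stream B": {
        "New": "New",
        "In Progress": "Active",
        "In UAT": "Waiting",       # queued, waiting to be tested
        "Done": "Done",
    },
}

def to_flow_state(value_stream: str, status: str) -> str:
    """Translate a team-specific status into one of the four flow states."""
    return FLOW_STATE_MODEL[value_stream].get(status, "New")

print(to_flow_state("Value Stream A", "In UAT"))  # Active
print(to_flow_state("Value Stream B", "In UAT"))  # Waiting
```

With a model like this in place, time spent in statuses that map to Waiting can be summed per flow item to give wait time, and Flow Efficiency can be reported consistently across value streams without forcing teams to rename their statuses.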
Mapping individual team statuses to these flow states provides a unified way of measuring value across the portfolio. This approach is more robust because it accommodates any changes needed at the team level while providing a consistent measure of flow across the enterprise.
In the next blog, we’ll look at how you can still find a single measure of value across different frameworks and ways of working.
https://blog.tasktop.com/blog/overcoming-data-fragmentation/