I've long taken inspiration from Peter Drucker's caution that if you can't measure it, you can't manage it. Technological progress has been punctuated by advances in measurement, ranging from Galileo's telescope to Intel's obsession with nanometers. Our industry is starting to go through a profound transformation in measuring how software is built, but only after we work through some big gaps in how we approach capturing, reporting and making decisions on these metrics.
Exactly 40 years have passed since Frederick Brooks cautioned that measuring software in terms of man-months was a bad idea. Pretty much everyone I know who has read that book agrees with the premise. But pretty much everyone I know is still measuring software delivery in terms of man-months, FTEs, and equivalent cost metrics that are as misleading as Brooks predicted. Over the past year I've had the benefit of meeting face-to-face with IT leaders in over 100 different large organizations and having them take me through how they're approaching the problem. The consistent theme that has arisen is that to establish meaningful measurement for software delivery, we need the following (where each layer is supported by the one below it):
1) Presentation layer
- Report generation
- Interactive visualization
- Dashboards & wallboards
- Predictive analytics
2) Metrics layer
- Business value of software delivery
- Efficiency of software delivery
3) Data storage layer
- Historical cross-tool data store
- Data warehouse or data mart
4) Integration infrastructure layer
- Normalized stream of data and schema updates from each tool
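To make the Integration infrastructure layer (4) concrete, here is a minimal sketch of what a normalized stream might look like: one tool-specific payload mapped onto a shared schema before it flows upward to storage and metrics. The field names, status mappings, and `normalize_jira_issue` function are illustrative assumptions of mine, not an actual Tasktop Data schema.

```python
# Hypothetical sketch: a normalized work-item event, the kind of record the
# Integration infrastructure layer (4) could emit for every connected tool.
# Field names and mappings are illustrative assumptions, not a real schema.
from dataclasses import dataclass, asdict

@dataclass
class WorkItemEvent:
    source_tool: str   # e.g. "jira", "rally", "hp-alm"
    item_id: str       # tool-local identifier
    item_type: str     # normalized type: "defect", "story", ...
    status: str        # normalized status: "open", "in-progress", "done"
    updated_at: str    # ISO-8601 timestamp

def normalize_jira_issue(raw: dict) -> WorkItemEvent:
    """Map one tool's payload onto the shared schema (mapping is made up)."""
    status_map = {"To Do": "open", "In Progress": "in-progress", "Done": "done"}
    return WorkItemEvent(
        source_tool="jira",
        item_id=raw["key"],
        item_type=raw["issuetype"].lower(),
        status=status_map.get(raw["status"], "open"),
        updated_at=raw["updated"],
    )

event = normalize_jira_issue(
    {"key": "PROJ-42", "issuetype": "Story", "status": "Done",
     "updated": "2015-06-01T12:00:00Z"}
)
print(asdict(event))
```

The point of the sketch is the shape of the problem: each tool needs its own mapping function, but everything downstream sees one schema.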
Nothing overly surprising there, but what's interesting is why existing approaches have not supported getting the right kind of reporting and analytics in the hands of organizations doing large scale software delivery. The Presentation layer (1) is not the problem. This is a mature space filled with great solutions such as the latest offerings from Tableau and Geckoboard, as well as the myriad of hardened enterprise Business Intelligence (BI) tools. What these generic tools lack is any domain understanding of software delivery. This is where the need for innovation on the Metrics layer (2) comes in.
Efforts in establishing software delivery metrics have been around as long as software itself, but given the vendor activity around them and the advances being made on lifecycle automation and DevOps, I predict that we are about to go through an important round of innovation on this front. A neat example of new thinking on software lifecycle metrics is Insight Ventures' Periodic Table of Software Development Metrics. Combining software delivery metrics with business value metrics is an even bigger opportunity, and one where the industry has barely scratched the surface.
For example, Tasktop's most forward-thinking customers are already creating their own web applications that correlate business metrics, such as sales and marketing data, with some basic software measures. A lot of innovation is left on this front, and the way that the data is manifested in the Storage layer (3) must support both the business and the software delivery metrics. The Data Storage layer (3) has a breadth of great commercial and open source options to choose from, thanks to the huge investment that vendors and VCs are making in big data.
The one that's most appropriate depends on the existing data warehouse/mart investment that's in place, as well as the kind of metrics that the organization is after. For example, efficiency trend metrics lend themselves best to time-series storage in a database such as MongoDB, while a relational database can suffice for compliance reports. For organizations that have already attempted to create end-to-end software lifecycle analytics, the biggest impediment to creating meaningful lifecycle metrics is clear: the Integration infrastructure layer (4). In the past, this was achieved by ETL processes, but that approach has fallen apart completely with the modern API-based tool chain. Each vendor has its own massive API set and its own highly customizable schemas and process models, and standards efforts, while important, are years away from sufficiently broad adoption.
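For the relational case, a compliance report can be as simple as a join over normalized lifecycle data. The sketch below uses SQLite and a schema I invented for illustration: it flags defects that were closed without any linked test run, the sort of traceability question a compliance report has to answer.

```python
# Minimal sketch of the Data storage layer (3), relational flavor:
# a traceability query over an assumed (invented) lifecycle schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE defects (id TEXT PRIMARY KEY, status TEXT);
    CREATE TABLE test_runs (defect_id TEXT, result TEXT);
    INSERT INTO defects VALUES ('D-1', 'closed'), ('D-2', 'closed'), ('D-3', 'open');
    INSERT INTO test_runs VALUES ('D-1', 'pass');
""")

# Compliance check: closed defects with no linked test run.
unverified = conn.execute("""
    SELECT d.id FROM defects d
    LEFT JOIN test_runs t ON t.defect_id = d.id
    WHERE d.status = 'closed' AND t.defect_id IS NULL
""").fetchall()
print("Closed without a test run:", [row[0] for row in unverified])
```

The hard part is not the query; it is getting normalized `defects` and `test_runs` data out of heterogeneous tools in the first place, which is the job of layer (4).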
Tasktop has long had a reputation for solving some of the hardest and least glamorous problems in the software lifecycle. Our effort is completely focused on expanding what we did with Tasktop Sync to create an entirely new data integration layer with Tasktop Data. Our goal is to support any of the Data Storage, Metrics or Presentation layers provided by our partner ecosystem. There are some truly innovative activities happening on that front, ranging from HP's Predictive ALM, to IBM's Jazz Reporting Service, to the Agile-specific views provided by Rally Insights. We are also working with industry thought leaders such as Israel Gat and Murray Cantor to make sure that the Data integration layer that we're creating supports the metrics and analytics that they're innovating.
What's unique about our focus is that Tasktop Data is the only product that provides normalized and unified data across your myriad of lifecycle tools (4). We are the only vendor focused entirely on the 4th layer of software lifecycle analytics, while ensuring that we support the best-of-breed solutions and frameworks in each of the layers above. In doing so, just as we work very closely with the broadest partner ecosystem of Agile/ALM/DevOps lifecycle vendors, we are looking forward to working with the leaders defining this critical and growing space of software lifecycle analytics. If you're interested in working together on any of these elements by leveraging Tasktop Data, please get in touch!