
Premium grade fuel for software lifecycle analytics engines

Published by Mik Kersten

I’ve long taken inspiration from Peter Drucker’s caution that “if you can’t measure it, you can’t manage it.” Technological progress has been punctuated by advances in measurement, ranging from Galileo’s telescope to Intel’s obsession with nanometers. Our industry is starting to go through a profound transformation in measuring how software is built, but only after we work through some big gaps in how we approach capturing, reporting and making decisions on these metrics.

Exactly 40 years have passed since Frederick Brooks cautioned that measuring software in terms of man-months was a bad idea. Pretty much everyone I know who has read that book agrees with the premise. But pretty much everyone I know is still measuring software delivery in terms of man-months, FTEs, and equivalent cost metrics that are as misleading as Brooks predicted. Over the past year I’ve had the benefit of meeting face-to-face with IT leaders in over 100 different large organizations and having them take me through how they’re approaching the problem. The consistent theme that has arisen is that to establish meaningful measurement for software delivery, we need the following stack, where each layer is supported by the one below it (a rough sketch of how the layers might compose in code follows the list):

1) Presentation layer

  • Report generation
  • Interactive visualization
  • Dashboards & wallboards
  • Predictive analytics

2) Metrics layer

  • Business value of software delivery
  • Efficiency of software delivery

3) Data storage layer

  • Historical cross-tool data store
  • Data warehouse or data mart

4) Integration infrastructure layer

  • Normalized stream of data and schema updates from each tool
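
To make the stacking concrete, here is a minimal sketch of how the four layers might compose, with each one consuming only the layer beneath it. All class and field names are my own illustration, not any product’s API:

```python
from dataclasses import dataclass
from typing import Iterable

# Layer 4: integration infrastructure emits a normalized stream of
# artifact changes from every tool in the chain.
@dataclass
class ArtifactEvent:
    tool: str          # e.g. "jira", "hp_alm" (illustrative names)
    artifact_id: str
    status: str        # normalized across tool-specific workflows
    timestamp: float   # seconds since epoch

class IntegrationLayer:
    def stream(self) -> Iterable[ArtifactEvent]:
        ...  # per-tool connectors would normalize each API into ArtifactEvent

# Layer 3: storage persists the normalized stream, including history.
class DataStore:
    def __init__(self, integration: IntegrationLayer):
        self.events = list(integration.stream())

# Layer 2: metrics are computed only from stored, normalized data.
class MetricsLayer:
    def __init__(self, store: DataStore):
        self.store = store

    def cycle_time(self, artifact_id: str) -> float:
        stamps = [e.timestamp for e in self.store.events
                  if e.artifact_id == artifact_id]
        return max(stamps) - min(stamps)

# Layer 1: presentation renders whatever the metrics layer exposes.
class Dashboard:
    def __init__(self, metrics: MetricsLayer):
        self.metrics = metrics
```

The point of the ordering is that a dashboard never talks to an individual tool directly; it only ever sees data that has already been normalized and stored.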


Nothing overly surprising there, but what’s interesting is why existing approaches have failed to put the right kind of reporting and analytics in the hands of organizations doing large-scale software delivery. The Presentation layer (1) is not the problem. This is a mature space filled with great solutions such as the latest offerings from Tableau and Geckoboard, as well as the myriad of hardened enterprise Business Intelligence (BI) tools. What these generic tools lack is any domain understanding of software delivery. This is where the need for innovation in the Metrics layer (2) comes in.

Efforts to establish software delivery metrics have been around as long as software itself, but given the vendor activity around them and the advances being made in lifecycle automation and DevOps, I predict that we are about to go through an important round of innovation on this front. A neat example of new thinking on software lifecycle metrics is Insight Ventures’ Periodic Table of Software Development Metrics. Combining software delivery metrics with business value metrics is an even bigger opportunity, and one where the industry has barely scratched the surface.
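
As one illustration of what a delivery efficiency metric in the Metrics layer (2) might look like, the sketch below computes flow efficiency: the share of a work item’s elapsed time spent in active work states. The state names and transition data are my own invention, not a prescribed standard:

```python
from datetime import datetime

# Status transitions of one work item, as (state, entered_at) pairs
# (invented data for illustration).
transitions = [
    ("backlog",     datetime(2015, 3, 2)),
    ("in_progress", datetime(2015, 3, 9)),
    ("blocked",     datetime(2015, 3, 11)),
    ("in_progress", datetime(2015, 3, 16)),
    ("done",        datetime(2015, 3, 18)),
]

ACTIVE_STATES = {"in_progress"}  # states that count as value-adding work

def flow_efficiency(transitions) -> float:
    """Active time divided by total elapsed time, as a fraction."""
    active = total = 0.0
    for (state, start), (_, end) in zip(transitions, transitions[1:]):
        span = (end - start).total_seconds()
        total += span
        if state in ACTIVE_STATES:
            active += span
    return active / total

print(f"flow efficiency: {flow_efficiency(transitions):.0%}")  # 25%
```

Whether flow efficiency, cycle time, or something richer is the right measure is exactly where this round of innovation is happening.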

For example, Tasktop’s most forward-thinking customers are already creating their own web applications that correlate business metrics, such as sales and marketing data, with some basic software measures (a toy sketch of such a correlation follows this paragraph). A lot of innovation is left on this front, and the way the data is manifested in the Data Storage layer (3) must support both the business and the software delivery metrics. The Data Storage layer has a breadth of great commercial and open source options to choose from, thanks to the huge investment that vendors and VCs are making in big data.
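
Picking up the correlation example above: a toy version might relate a per-release delivery metric to a per-release business measure. All numbers here are invented, and `statistics.correlation` requires Python 3.10+:

```python
from statistics import correlation  # Pearson's r; Python 3.10+

# Per-release delivery metric (days) and business metric (revenue, $k),
# as a customer-built analytics app might pull them from the data store.
cycle_time_days = [34, 28, 41, 22, 30, 19]
release_revenue = [110, 150, 90, 210, 140, 260]

r = correlation(cycle_time_days, release_revenue)
print(f"Pearson r between cycle time and revenue: {r:+.2f}")
```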

Which store is most appropriate depends on the existing data warehouse/mart investment already in place, as well as the kind of metrics the organization is after. For example, efficiency trend metrics may be best served by a time-series store such as MongoDB, while a relational database can suffice for compliance reports. For organizations that have already attempted to create end-to-end software lifecycle analytics, the biggest impediment to meaningful lifecycle metrics is clear: the Integration infrastructure layer (4). In the past this was achieved with ETL processes, but that approach has fallen apart completely with the modern API-based tool chain. Each vendor has its own massive API set and its own highly customizable schemas and process models, and standards efforts, while important, are years away from sufficiently broad adoption.
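
To see why this layer is the hard part, consider what normalization has to do for even a single artifact: two tools expose “the same” defect with different field names, status vocabularies, and timestamp formats. The payloads below are invented stand-ins, not the real Jira or HP ALM schemas:

```python
from datetime import datetime, timezone

# Invented stand-ins for two vendors' API payloads.
jira_issue = {"key": "PROJ-42",
              "fields": {"status": {"name": "In Progress"},
                         "updated": "2015-03-18T10:15:00+0000"}}
alm_defect = {"id": 1042, "status": "Open",
              "last-modified": "2015-03-18 10:15:00"}

# Map each tool's workflow vocabulary onto one normalized set of states.
STATUS_MAP = {"In Progress": "in_progress", "Open": "in_progress",
              "Done": "done", "Closed": "done"}

def normalize_jira(raw) -> dict:
    return {"tool": "jira",
            "artifact_id": raw["key"],
            "status": STATUS_MAP[raw["fields"]["status"]["name"]],
            "updated": datetime.strptime(raw["fields"]["updated"],
                                         "%Y-%m-%dT%H:%M:%S%z")}

def normalize_alm(raw) -> dict:
    return {"tool": "hp_alm",
            "artifact_id": str(raw["id"]),
            "status": STATUS_MAP[raw["status"]],
            "updated": datetime.strptime(raw["last-modified"],
                                         "%Y-%m-%d %H:%M:%S")
                               .replace(tzinfo=timezone.utc)}

for record in (normalize_jira(jira_issue), normalize_alm(alm_defect)):
    print(record)
```

Multiply this by dozens of tools, per-customer custom fields, and schema changes with every upgrade, and it becomes clear why one-off ETL scripts collapse under the maintenance burden.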

Tasktop has long had a reputation for solving some of the hardest and least glamorous problems in the software lifecycle. Our effort is completely focused on expanding what we did with Tasktop Sync to create an entirely new data integration layer with Tasktop Data. Our goal is to support any of the Data Storage, Metrics or Presentation layers provided by our partner ecosystem. There are some truly innovative activities happening on that front, ranging from HP’s Predictive ALM, to IBM’s Jazz Reporting Service, to the Agile-specific views provided by Rally Insights. We are also working with industry thought leaders such as Israel Gat and Murray Cantor to make sure that the data integration layer we’re creating supports the metrics and analytics they’re developing.

What’s unique about our focus is that Tasktop Data is the only product that provides normalized, unified data across your myriad of lifecycle tools (4). We are the only vendor focused entirely on this fourth layer of software lifecycle analytics, while ensuring that we support the best-of-breed solutions and frameworks in each of the layers above. In doing so, just as we work closely with the broadest partner ecosystem of Agile/ALM/DevOps lifecycle vendors, we look forward to working with the leaders defining this critical and growing space of software lifecycle analytics. If you’re interested in working together on any of these elements by leveraging Tasktop Data, please get in touch!


Written by Mik Kersten

Dr. Mik Kersten started his career as a research scientist at Xerox PARC, where he created the first aspect-oriented development environment. He then pioneered the integration of development tools with Agile and DevOps as part of his Computer Science PhD at the University of British Columbia. Founding Tasktop out of that research, he has written over one million lines of open source code that are still in use today and has brought seven successful open source and commercial products to market. He has also been involved in some of the world’s largest digital transformations. Through that work he recognized the disconnect between business leaders and technologists, and has since been working on creating new tools and a new framework, the Flow Framework™, for building software value stream networks and enabling the shift from project to product. Mik lives with his family in Vancouver, Canada, and travels the world sharing his vision for transforming software delivery. He is the author of Project to Product, a book that helps IT organizations survive and thrive in the age of software.