Most software lifecycle integration (SLI) projects fail because organizations underestimate just how difficult integration is and are unaware that software development and delivery tools are not designed to integrate.
The APIs in endpoint tools were built by their vendors to support a tiered architecture, not necessarily third-party integration. Each tool is built for a specific function, e.g. JIRA for Agile Project Management, HPE ALM for Application Lifecycle Management, CA PPM (Clarity) for Project and Portfolio Management, and so on. They're best-of-breed tools built to optimize their users' capabilities in their respective roles.
By looking at connecting 'just' two tools, you quickly see how technical clashes between them create a litany of complications that undermine the tools' ability to communicate – despite this being the bare minimum requirement of any integration.
You don't want to simply mirror an artifact from one tool in another – you want that artifact to be understood across the value chain, so that all teams and tools understand the context of what they're working with, and towards, for optimized collaboration. To do this, we must ascertain:
- How each individual tool is used, and by whom
- Which object models represent the artifacts, projects, users, workflows and so on
- How continual customizations, such as changes and additions, will be handled
- What restrictions, user behaviours and needs apply
- What future expansion and scalability objectives exist
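One lightweight way to make that discovery concrete is to capture an integration profile for each endpoint. The sketch below is purely illustrative – the field names are assumptions, not any vendor's schema – but it shows the kind of information each tool needs to surface before integration work begins.

```python
from dataclasses import dataclass

# Illustrative only: a minimal "integration profile" for one endpoint tool,
# capturing the discovery questions listed above. Field names are assumptions.
@dataclass
class EndpointProfile:
    tool_name: str                      # e.g. "JIRA", "HPE ALM"
    primary_users: list[str]            # who uses the tool day to day
    artifact_types: list[str]           # object model: defect, story, epic, ...
    custom_fields: dict[str, str]       # ongoing customizations (name -> type)
    constraints: list[str]              # permissions, workflow restrictions, etc.
    planned_expansion: str = "unknown"  # future scalability objectives

jira_profile = EndpointProfile(
    tool_name="JIRA",
    primary_users=["development team"],
    artifact_types=["epic", "story", "defect"],
    custom_fields={"severity": "enum", "sprint": "string"},
    constraints=["only project admins can edit workflow states"],
)
```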
Each endpoint has its own model of 'like objects' (such as a defect), and in theory they hold the same type of data. But each tool stores and labels this data differently, and each can be modified with custom attributes and permissions, with different formats and values.
For instance, the defect tracking tool may have three priority levels (1, 2, 3) while the agile planning tool has four (critical, major, minor, trivial). They share the same understanding of the artifact, but have no means to communicate it to each other accurately. They need a multi-lingual translator.
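To make the translator idea concrete, here is a minimal sketch of a value mapping between the two hypothetical priority schemes above. The mappings are illustrative assumptions; a real synchronization engine has to handle far more than one field – types, attachments, comments, workflow states and conflict resolution among them.

```python
# Minimal sketch of a value mapping between two priority schemes.
# The mappings below are illustrative assumptions, not a vendor-defined standard.
DEFECT_TOOL_TO_AGILE_TOOL = {
    1: "critical",
    2: "major",
    3: "minor",    # nothing maps to "trivial": lossy in one direction
}

AGILE_TOOL_TO_DEFECT_TOOL = {
    "critical": 1,
    "major": 2,
    "minor": 3,
    "trivial": 3,  # two values collapse into one: lossy in the other direction
}

def translate_priority(value, mapping):
    """Translate a priority value, failing loudly on anything unmapped."""
    try:
        return mapping[value]
    except KeyError:
        raise ValueError(f"No mapping defined for priority {value!r}") from None

print(translate_priority("trivial", AGILE_TOOL_TO_DEFECT_TOOL))  # -> 3
```

Notice that even this tiny example is lossy in both directions – exactly the kind of conflict described below.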
These differences mean any synchronization between artifacts must bridge widely divergent data models, which creates a kind of 'data shoehorning'. You're trying to align two concepts that don't naturally match, and that creates conflict. Or, to borrow a term from electrical engineering, an 'impedance mismatch'.
Impedance mismatch arises from the different languages the tools use and from the relationships artifacts have with other objects. Those relationships must be understood to provide context; artifacts do not live independently of one another. Each story belongs to an epic in a tool, and there are many chapters within that story that must be understood and communicated for tools to interoperate. We call this 'semantic understanding'.
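A hedged illustration of why relationships matter: a defect on its own means little without the story and epic it relates to. The model below is a deliberately simplified assumption about how an artifact might carry that context – the names are illustrative, not a real canonical schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# Simplified, illustrative model: an artifact only makes sense together with
# its relationships (a story belongs to an epic, a defect blocks a story, etc.).
@dataclass
class Artifact:
    artifact_id: str
    artifact_type: str                   # "epic", "story", "defect", ...
    title: str
    parent_id: Optional[str] = None      # e.g. the epic a story belongs to
    links: dict[str, list[str]] = field(default_factory=dict)  # e.g. {"blocks": [...]}

epic = Artifact("EP-1", "epic", "Self-service checkout")
story = Artifact("ST-7", "story", "Scan items with phone camera", parent_id="EP-1")
defect = Artifact("DF-42", "defect", "Camera scan fails in low light",
                  links={"blocks": ["ST-7"]})
```

Synchronizing the defect alone, without its links and parentage, strips away the context the receiving tool needs.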
In seeking this information, it's only natural to consult the tools' APIs. And it is at this juncture that we discover our first real hurdle to integration: APIs rarely provide this integration-essential information because they're not documented for such a process. As touched on earlier, they're created for a specific purpose by the vendor.
If there is any documentation, it's often vague and/or incomplete. And of course, all tools are subject to sudden upgrades and changes – especially given the rise of on-demand and SaaS applications – which can instantly undermine any integration. You can read more about why APIs are a double-edged sword for integration in our blog 'APIs are not the keys to the integration kingdom' next week.
Furthermore, connecting two endpoints is only the start. The real challenge comes when you want to add a third, fourth or fifth tool, which is what you should be aiming for in effective large-scale integration. It would be natural to assume that once the 'hard work' of the first connection has been done, any additional integration would be a simple, iterative process. Sadly, this is not the case – there is no one proven formula. The complexity only increases:
While the learning curve isn't as steep as for the first integration, it doesn't flatten as one would hope. Some of the issues that reared their ugly heads in the first integration will return. Once again you'll have to run the technical analysis of the tool, establish how artifacts are represented, identify the similarities and reconcile the differences with the other tools in your software workflow. The API information will once again be of little help, and there will be more unforeseen complications.
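One way to see why the curve refuses to flatten: with point-to-point integrations, every new tool has to be reconciled against every tool already connected. The back-of-the-envelope illustration below is our own simplification, not a measurement, but the arithmetic of pairwise mappings speaks for itself.

```python
# Rough illustration: with point-to-point integrations, each new endpoint
# must be mapped against every endpoint already connected.
def pairwise_connections(n_tools: int) -> int:
    return n_tools * (n_tools - 1) // 2

for n in (2, 3, 4, 5):
    print(f"{n} tools -> {pairwise_connections(n)} point-to-point mappings")
# 2 tools -> 1, 3 tools -> 3, 4 tools -> 6, 5 tools -> 10
```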
So how do you safeguard your software development ecosystem with a robust, steadfast integration?
The key is a solution that understands the complex science of large-scale software development integration and possesses the 'next level' information that APIs don't provide: a model-based solution that acts as the multi-lingual translator, ensuring that all endpoints, regardless of number, can communicate with each other (a conceptual sketch follows the list below). If you're investigating a solution, you need to make sure it includes the following:
- Semantic understanding
- Domain expertise
- Neutral relationships with endpoint vendors that allow for deep understanding of the tools
- Testing infrastructure that ensures integrations are always working
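To illustrate what 'model-based' means in practice, here is a conceptual sketch with assumed class and method names (it is not any vendor's actual API). Each endpoint adapter translates to and from one shared canonical model, so tools never map to each other directly, and adding a tool means writing one adapter rather than re-mapping against every existing connection.

```python
# Conceptual sketch of a model-based hub: every endpoint adapter translates
# to and from one shared canonical model. Names are assumptions for illustration.
class CanonicalDefect:
    def __init__(self, title: str, priority: str):
        self.title = title
        self.priority = priority  # canonical vocabulary: critical/major/minor/trivial

class EndpointAdapter:
    """One adapter per tool: knows only its own tool and the canonical model."""
    def to_canonical(self, raw: dict) -> CanonicalDefect:
        raise NotImplementedError

    def from_canonical(self, defect: CanonicalDefect) -> dict:
        raise NotImplementedError

class DefectToolAdapter(EndpointAdapter):
    _to_canonical = {1: "critical", 2: "major", 3: "minor"}
    _from_canonical = {"critical": 1, "major": 2, "minor": 3, "trivial": 3}

    def to_canonical(self, raw: dict) -> CanonicalDefect:
        return CanonicalDefect(raw["summary"], self._to_canonical[raw["priority"]])

    def from_canonical(self, defect: CanonicalDefect) -> dict:
        return {"summary": defect.title, "priority": self._from_canonical[defect.priority]}

# Synchronizing two tools then reduces to:
#   adapter_a.to_canonical(...) followed by adapter_b.from_canonical(...)
```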
For more information, please:
Download our eBook on ‘Why Integration Is Hard’
Speak to our dedicated team
Visit our product pages
We will also be discussing the enormous challenges of software lifecycle integration on Tuesday, January 31st, during our special livestreamed event, TasktopLIVE. You can learn more about the event here.