Creating integrations is hard, but testing them is even harder. Every web service API has its own vocabulary, semantics, nuances, and bugs. Every release of a web service potentially involves breaking changes, both syntactic and behavioral. When those web services are controlled by third parties, it gets even harder. Since creating integrations is our business, we set out to improve how we create them: to build a scalable, reliable method for producing high-quality integrations that takes into account the continuously shifting landscape of integration end-points. Here are the hard-won best practices we've developed, which you can apply to your own integrations.
The Integration Factory
We created an Integration Factory. Factory is a fitting name, since the process is highly repeatable and builds lots of similar things – but that raises the question: what is an Integration Factory? Though it shares some similarities with software factories, especially in its use of manufacturing techniques, it doesn't apply the model-driven code-generation techniques frequently associated with that term. Integration Factory is really just a fancy term for:
- The Integration Specification (or the spec)
- The TCK (Technology Compatibility Kit)
- Connectors, which are implementations of the specification (these are the integrations)
- A build and test environment, which enables testing integrations against 3rd party systems
- Reporting, which provides an incredible level of detail on correctness and TCK conformance
- A delivery process for evolving the spec, TCK and connectors
- Continuous Integration
- Code reviews
- Build triggers
It's a set of technologies, an approach, a methodology – a repeatable process for creating robust, high-quality integrations.
Connectors
Having a common API for creating integrations is an essential step in creating a factory. Tasktop Sync uses Mylyn Tasks as the API for integrations. Mylyn Tasks is a fantastic API and framework originally developed for IDE integrations, enabling developers to bring ALM artifacts (tasks, bugs, change requests, requirements, etc.) into their IDE. At the core of this API is a common data model for ALM artifacts and an API for performing basic CRUD and search operations. We call implementations of this API connectors. We have lots of connectors: one each for Atlassian JIRA, Microsoft TFS, IBM RTC, IBM RRC, HP Quality Center, HP ALM, CA Clarity, and more.
Figure 1: a typical Mylyn connector as it relates to API
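To make the idea concrete, here is a minimal sketch of what such a connector API might look like: a common artifact model plus CRUD and search operations, with a trivial in-memory implementation. The names and shapes here are illustrative assumptions, not the actual Mylyn Tasks API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

// Common data model: an ALM artifact is a bag of named attributes.
class TaskData {
    final String id;
    final Map<String, String> attributes = new HashMap<>();
    TaskData(String id) { this.id = id; }
}

// The common API every connector implements (hypothetical names).
interface Connector {
    void validateConnection();        // can we reach the repository?
    String create(TaskData data);     // returns the new artifact id
    TaskData retrieve(String id);
    void update(String id, TaskData data);
    List<TaskData> search(String attribute, String value);
}

// A trivial in-memory connector, handy for exercising TCK-style tests.
class InMemoryConnector implements Connector {
    private final Map<String, TaskData> store = new HashMap<>();

    public void validateConnection() { /* always reachable */ }

    public String create(TaskData data) {
        String id = UUID.randomUUID().toString();
        TaskData stored = new TaskData(id);
        stored.attributes.putAll(data.attributes);
        store.put(id, stored);
        return id;
    }

    public TaskData retrieve(String id) { return store.get(id); }

    public void update(String id, TaskData data) {
        store.get(id).attributes.putAll(data.attributes);
    }

    public List<TaskData> search(String attribute, String value) {
        List<TaskData> result = new ArrayList<>();
        for (TaskData t : store.values()) {
            if (value.equals(t.attributes.get(attribute))) result.add(t);
        }
        return result;
    }
}
```

Because every connector presents the same surface, the same generic tests can be pointed at any implementation – which is exactly what the Connector TCK exploits below.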
While a common data model and API lets us connect any end-point to any other end-point when synchronizing ALM artifacts, the API on its own is not enough. We need to know that each connector implements the API correctly. We need to know the capabilities of the connector and of the web service, how they differ for each version of the web service, and whether they change as new versions of the web service are released. We need to know what works, what doesn't, and why: is it a shortcoming of the connector implementation, or a limitation of the web service? This leads us to the Connector TCK.
Connector TCK
During one of our innovation-oriented engineering Ship-It days, one of our engineers prototyped a set of generic tests that could be configured to run against any connector. Why not apply the concept of a TCK to connectors? Benjamin dubbed his creation the Connector TCK, and the name stuck. The Connector TCK contains tests that verify every connector implements the API correctly and that probe the capabilities of each implementation.
Figure 2: the Connector TCK
The tests added to the Connector TCK range from the most basic (e.g. a connection can be established with a repository) to the more detailed (e.g. a file attachment with non-ASCII characters in its file name can be created on an artifact and retrieved correctly). The beauty of the Connector TCK is that it measures the quality and capabilities of every connector equally. It can be configured to run a connector against multiple versions of a repository; in fact, we test as many versions as we believe necessary to ensure correct behaviour for every supported version of an integration end-point.
Figure 3: testing a connector with multiple versions of a repository
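The multi-version idea can be sketched as a small harness: each named check throws on failure, and the harness runs the same checks against every configured repository version, tabulating pass/fail per version. The harness and check names below are illustrative assumptions, not the actual Connector TCK.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

class TckHarness {
    // Runs every check against every configured repository version and
    // returns a version -> (check name -> passed) table.
    static Map<String, Map<String, Boolean>> run(
            List<String> repositoryVersions, Map<String, Consumer<String>> checks) {
        Map<String, Map<String, Boolean>> report = new LinkedHashMap<>();
        for (String version : repositoryVersions) {
            Map<String, Boolean> row = new LinkedHashMap<>();
            for (Map.Entry<String, Consumer<String>> check : checks.entrySet()) {
                boolean passed;
                try {
                    check.getValue().accept(version);  // a check throws on failure
                    passed = true;
                } catch (RuntimeException | AssertionError e) {
                    passed = false;
                }
                row.put(check.getKey(), passed);
            }
            report.put(version, row);
        }
        return report;
    }
}
```

The resulting table is exactly what makes version-to-version comparisons cheap: a capability that regresses in a new repository release shows up as a flipped cell rather than a mystery bug report.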
Having a Connector TCK is great – but we've skipped an essential question: what tests should be in it? The only way to know for sure is to have a definitive contract: a specification.
A Specification
For some software engineers, requirements aren't glamorous, exciting, or even all that interesting. When we sit down at a keyboard, the first thing we want to do is start hammering out code. That's like framing a house without blueprints: sure, it's fun – but the house won't be what we want in the end. The Integration Specification (the spec) is the blueprint that spells out the desired behaviour of integrations. The spec takes the following form:
- User Stories (US) – stories written from the user's perspective that define the functionality of integrations
- Technical User Stories (TUS) – stories written from the technology perspective that map to the connector API
- Acceptance Criteria (AC) – criteria that must be satisfied in order for a technical user story to be considered complete
Here's an example from the spec:
- US-2: Connector client can set up a connection to a repository
  - TUS-2.1: Connector client can establish a connection with the repository server given the URL, credentials and other necessary connection parameters
    - AC-2.1.1: Connector client can validate URL, credentials and other necessary connection parameters and receive feedback of successful connection
    - AC-2.1.2: Connector client receives meaningful feedback for invalid or missing URL
    - AC-2.1.3: Connector client receives meaningful feedback for invalid or missing credentials
    - AC-2.1.4: Connector client…
  - TUS-2.2: Connector client can…
Normal software development often involves building features to a spec (or without one) and moving on. In our case, where we're building many integrations that essentially do the same thing, we get a lot of mileage out of the spec. TUSs and ACs in the spec apply to every connector implementation, of which there are many. So we treat the spec with a kind of reverence that is unusual among software engineers.
Pulling It Together
The magic in this process really comes to light when we pull it all together. Using JUnit and its powerful TestRule concept, we can connect our tests to ACs from the spec using a simple annotation:
import java.lang.annotation.Retention;
import java.lang.annotation.Target;

import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

@Retention(RUNTIME)
@Target(METHOD)
public @interface Validates {

    /**
     * Provides the IDs of the acceptance criteria.
     */
    String[] value();
}
Here's an example of the annotation in use:
@Test
@Validates("2.1.2")
public void testMissingUrl() {
    // test it
}
With this simple technique, we can report on test results within the context of the specification. The test report takes on a whole new significance: it's now a report on TCK compliance and connector capabilities. We can now definitively say which features are working and which are not for any integration, and easily determine differences when testing against new versions of a web service.
Figure 4: TCK compliance reporting
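The mechanics behind such a report can be sketched with plain reflection: collect the `@Validates` IDs from each test method and invert them into a per-criterion coverage map. The annotation is repeated here so the sketch compiles standalone; the reporting class and example test class are hypothetical, and a real suite would feed actual pass/fail results from a JUnit TestRule rather than method names.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

@Retention(RUNTIME)
@Target(METHOD)
@interface Validates {
    String[] value();
}

class TckReport {
    // Maps each acceptance-criterion ID to the test methods that validate it.
    static Map<String, List<String>> coverage(Class<?> testClass) {
        Map<String, List<String>> byCriterion = new TreeMap<>();
        for (Method m : testClass.getDeclaredMethods()) {
            Validates v = m.getAnnotation(Validates.class);
            if (v == null) continue;
            for (String acId : v.value()) {
                byCriterion.computeIfAbsent(acId, k -> new ArrayList<>())
                           .add(m.getName());
            }
        }
        return byCriterion;
    }
}

// Example test class (plain methods here; in practice JUnit @Test methods).
class ConnectionTests {
    @Validates("2.1.2")
    public void testMissingUrl() { }

    @Validates({"2.1.1", "2.1.3"})
    public void testInvalidCredentials() { }
}
```

An AC that maps to no test method is an immediate gap in TCK coverage, which is how a report like this keeps the spec and the test suite honest with each other.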
What Comes Next?
In the API economy it's hard, but possible, to create high-quality integrations. We've looked at some of the concepts behind an Integration Factory that make it a lot easier. In the next installment, we'll look at other aspects of an Integration Factory, including build and test environments and the delivery process.